Automatic Subjective Answer Evaluation

  • Unique Paper ID: 176808
  • ISSN: 2349-6002
  • Volume: 11
  • Issue: 11
  • PageNo: 6268-6271
  • Abstract: Subjective questions and responses provide an open-ended assessment of a student’s understanding, allowing students to express their knowledge in a personalized and conceptual manner. However, manual evaluation of such answers is often time-consuming, inconsistent, and prone to bias. This project proposes an automated system for evaluating subjective answers using Machine Learning (ML) and Natural Language Processing (NLP) techniques. The system uses NLP methods and models such as Word2Vec, WordNet, Word Mover’s Distance (WMD), Cosine Similarity, Term Frequency–Inverse Document Frequency (TF-IDF), and Multinomial Naive Bayes (MNB) to analyze and score answers. By comparing student responses to reference answers on the basis of semantic similarity and keyword relevance, the model predicts a score with high accuracy. The system aims to improve grading consistency, reduce evaluation time, and enhance the overall efficiency of academic assessments. Experimental results show that WMD preserves semantic meaning better than Cosine Similarity, and that, with sufficient training, the machine learning model is capable of functioning autonomously.
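The abstract describes comparing a student response to a reference answer using both keyword relevance (TF-IDF with Cosine Similarity) and semantic similarity (Word2Vec embeddings with WMD). The sketch below is not the paper’s implementation; it is a minimal illustration of how those two signals could be computed and combined. The example sentences, the tiny Word2Vec model, the score weighting, and the 0–10 scale are all assumptions, and it presumes scikit-learn and gensim (with the POT package for WMD) are available.

```python
# Illustrative sketch only, not the authors' code: score a student answer against a
# reference answer using TF-IDF cosine similarity and Word Mover's Distance (WMD).
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical example answers (not from the paper).
reference = "photosynthesis converts light energy into chemical energy stored in glucose"
student = "plants use light to make chemical energy in the form of glucose"

# Keyword-relevance signal: TF-IDF vectors compared with cosine similarity.
tfidf = TfidfVectorizer().fit([reference, student])
vectors = tfidf.transform([reference, student])
cos_sim = cosine_similarity(vectors[0], vectors[1])[0, 0]

# Semantic-similarity signal: Word2Vec embeddings compared with WMD.
# A toy Word2Vec model trained on just these two sentences, purely for illustration;
# a real system would use embeddings trained on a large corpus.
tokens_ref = reference.split()
tokens_stu = student.split()
w2v = Word2Vec([tokens_ref, tokens_stu], vector_size=50, min_count=1, epochs=50)
wmd = w2v.wv.wmdistance(tokens_ref, tokens_stu)  # lower distance = closer meaning

# Combine the two signals into a 0-10 score; the equal weights are an assumption.
score = 10 * (0.5 * cos_sim + 0.5 * (1 / (1 + wmd)))
print(f"cosine similarity: {cos_sim:.2f}, WMD: {wmd:.2f}, predicted score: {score:.1f}/10")
```

In a fuller pipeline, such similarity features could also feed a Multinomial Naive Bayes classifier (for example, scikit-learn's MultinomialNB) trained on previously graded answers, which is one way the autonomous grading mentioned in the abstract could be realized.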
