Subjective Answer Evaluation

  • Unique Paper ID: 169294
  • Volume: 11
  • Issue: 6
  • PageNo: 626-628
  • Abstract: Evaluating subjective papers manually is tedious, time-consuming, and inconsistent. Unlike objective tests, subjective answers require in-depth analysis because they are open-ended and vary significantly in structure and length. Traditional automated grading approaches, such as keyword matching, often fail to capture the full meaning and context of a response. This project addresses these challenges by leveraging machine learning and natural language processing (NLP) to automate subjective answer evaluation. We employ techniques such as Word2Vec, cosine similarity, Word Mover’s Distance (WMD), Naive Bayes, and BERT for deeper contextual understanding. The system evaluates answers based on content and semantics, improving grading accuracy and fairness while reducing time and effort.
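The abstract names cosine similarity as one of the scoring techniques. As a minimal, dependency-free sketch of that idea, the snippet below scores a hypothetical student answer against a model answer using cosine similarity over bag-of-words counts; the paper pairs this measure with Word2Vec embeddings, which raw token counts merely stand in for here. The example sentences and function name are illustrative, not taken from the paper.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words counts.

    A stand-in for the embedding-based similarity described in the
    abstract: with Word2Vec, the count vectors would be replaced by
    averaged word vectors, but the cosine formula is identical.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical model answer and student response.
model_answer = "photosynthesis converts light energy into chemical energy"
student_answer = "plants use light energy to make chemical energy in photosynthesis"
score = cosine_similarity(model_answer, student_answer)
```

A real grader would threshold or scale `score` into marks; identical answers score 1.0 and answers sharing no vocabulary score 0.0, which is exactly the keyword-matching weakness that embedding-based methods like WMD and BERT are brought in to fix.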

Cite This Article

  • ISSN: 2349-6002
