Development of a practical system for computerized evaluation of descriptive answers of middle school level students

INTERACTIVE LEARNING ENVIRONMENTS (2022)

Abstract
Assessment plays an important role in education. Recently proposed machine learning-based systems for answer grading demand large amounts of training data, which are not available in many application areas, and creating sufficient training data is costly and time-consuming. As a result, automatic long-answer grading remains a challenge. In this paper, we propose a practical system for grading long or descriptive answers that can operate in a small-class scenario. The system uses an expert-written reference answer and computes the similarity of a student answer with it. For the similarity computation, it uses several word-level and sentence-level similarity measures, including TF-IDF, Latent Semantic Indexing, Latent Dirichlet Allocation, the TextRank summarizer, and the neural sentence embedding-based InferSent. A student answer might contain facts that do not occur in the model answer; the system identifies such sentences, examines their relevance and correctness, and assigns extra marks accordingly. In the final phase, the system applies a clustering-based confidence analysis. The system is tested on an assessment of school-level social science answer books. The experimental results demonstrate that the system evaluates the answer books with high accuracy; the best root mean square error is 0.59 on a 0-5 scoring scale.
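To make the word-level similarity step concrete, the following is a minimal sketch, not the authors' implementation, of how a TF-IDF cosine similarity between a reference answer and a student answer might be computed in Python with scikit-learn. The helper name tfidf_similarity and the direct rescaling of the similarity to the paper's 0-5 scoring scale are illustrative assumptions; the paper combines this measure with several others before assigning a grade.

# Minimal sketch (assumption: the abstract does not specify an implementation;
# scikit-learn and the 0-5 rescaling are illustrative choices, not the paper's).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_similarity(reference_answer: str, student_answer: str) -> float:
    """Cosine similarity between TF-IDF vectors of two answers, in [0, 1]."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on both texts so they share one vocabulary, then compare the rows.
    tfidf = vectorizer.fit_transform([reference_answer, student_answer])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

reference = "The water cycle moves water between oceans, atmosphere, and land."
student = "Water evaporates from oceans, forms clouds, and returns as rain."
similarity = tfidf_similarity(reference, student)
# Illustrative linear mapping onto a 0-5 scale.
print(f"similarity={similarity:.2f}, score={similarity * 5:.1f}/5")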
Keywords
Answer grading, educational assessment, automatic evaluation, student writing assessment, text similarity