Data Augmentation by Rubrics for Short Answer Grading

Journal of Natural Language Processing (2021)

Abstract
Short Answer Grading (SAG) is the task of scoring students' answers in applications such as examinations or e-learning. Most existing SAG systems predict scores based only on the answers themselves, ignoring critical evaluation criteria such as rubrics, which play a crucial role in evaluating answers in real-world situations. In this paper, we propose a semi-supervised method to train a neural SAG model. We extract keyphrases that are highly related to answer scores from rubrics. Instead of relying on manually annotated attention labels, we compute weights over the words of each answer as attention labels, based on span-wise alignments between answers and keyphrases. Only answers containing highly weighted words are used as attention supervision. We evaluate the proposed model on two analytical assessment tasks: analytic score prediction and justification identification. Analytic score prediction is the task of predicting the score of a given answer for a prompt, and justification identification is the task of identifying a justification cue in a given student answer for each analytic score. Our experimental results demonstrate that both grading and justification identification performance improve when attention semi-supervised training is integrated, especially in low-resource settings.
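The attention-label construction described in the abstract can be illustrated with a minimal sketch. It assumes exact phrase matching between answer tokens and rubric keyphrases as a stand-in for the paper's span-wise alignment; the function name, threshold, and matching rule below are hypothetical, not the authors' implementation.

from typing import List, Optional

def pseudo_attention_labels(answer_tokens: List[str],
                            keyphrases: List[str],
                            min_weight: float = 0.5) -> Optional[List[float]]:
    """Derive pseudo attention labels for an answer from rubric keyphrases.

    Hypothetical sketch: exact span matching stands in for the paper's
    span-wise alignment between answers and keyphrases.
    """
    weights = [0.0] * len(answer_tokens)
    for phrase in keyphrases:
        p_tokens = phrase.split()
        n = len(p_tokens)
        # mark every answer span that matches this keyphrase
        for i in range(len(answer_tokens) - n + 1):
            if answer_tokens[i:i + n] == p_tokens:
                for j in range(i, i + n):
                    weights[j] = 1.0
    if max(weights, default=0.0) < min_weight:
        return None  # no highly weighted words: skip attention supervision
    total = sum(weights)
    return [w / total for w in weights]  # normalized attention distribution

# Example: tokens overlapping the rubric keyphrase receive the attention mass.
answer = "the ice melts because heat raises the temperature".split()
print(pseudo_attention_labels(answer, ["heat raises the temperature"]))

Answers for which no keyphrase-aligned span is found are simply excluded from the attention loss, consistent with the abstract's statement that only answers with highly weighted words are used as attention supervision.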
Key words
Student Performance Prediction, Adaptive Learning, Predictive Analysis, Topic Modeling, Educational Data Mining