Large Language Models for In-Context Student Modeling: Synthesizing Student's Behavior in Visual Programming
CoRR (2023)
Abstract
Student modeling is central to many educational technologies as it enables
predicting future learning outcomes and designing targeted instructional
strategies. However, open-ended learning domains pose challenges for accurately
modeling students due to the diverse behaviors and a large space of possible
misconceptions. To approach these challenges, we explore the application of
large language models (LLMs) for in-context student modeling in open-ended
learning domains. More concretely, given a particular student's attempt on a
reference task as observation, the objective is to synthesize the student's
attempt on a target task. We introduce a novel framework, LLM for Student
Synthesis (LLM-SS), that leverages LLMs for synthesizing a student's behavior.
Our framework can be combined with different LLMs; moreover, we fine-tune LLMs
to boost their student modeling capabilities. We instantiate several methods
based on the LLM-SS framework and evaluate them on StudentSyn, an existing
benchmark for student attempt synthesis in a visual programming domain.
Experimental results show that our methods perform significantly better than
the baseline method NeurSS provided in the StudentSyn benchmark. Furthermore,
our method using a fine-tuned version of GPT-3.5 performs significantly
better than its counterpart using the base GPT-3.5 model and approaches the
performance of human tutors.
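
To make the in-context (one-shot) setup concrete, the sketch below shows how a student's observed attempt on a reference task could be assembled into a synthesis prompt for a target task. The prompt wording and the call_llm parameter are illustrative assumptions, not the paper's actual LLM-SS prompts; the framework itself is model-agnostic.

    def build_prompt(reference_task: str, student_attempt: str, target_task: str) -> str:
        # One-shot prompt: the student's attempt on the reference task is the
        # in-context observation of their behavior and misconceptions.
        return (
            "You are modeling a student learning visual programming.\n\n"
            "Reference task:\n" + reference_task + "\n\n"
            "The student's attempt on the reference task:\n" + student_attempt + "\n\n"
            "Target task:\n" + target_task + "\n\n"
            "Synthesize the attempt this student would most likely produce on "
            "the target task, preserving their apparent misconceptions:\n"
        )

    def synthesize_attempt(call_llm, reference_task, student_attempt, target_task):
        # call_llm: any text-completion callable, e.g. a base or fine-tuned
        # GPT-3.5 endpoint (an assumption here; LLM-SS can plug in different LLMs).
        return call_llm(build_prompt(reference_task, student_attempt, target_task))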
Keywords
visual programming, large language models, language models, in-context, one-shot