Trusting Language Models in Education

Jogi Suda Neto, Li Deng, Thejaswi Raya, Reza Shahbazi, Nick Liu, Adhitya Venkatesh, Miral Shah, Neeru Khosla, Rodrigo Capobianco Guido

CoRR (2023)

Abstract
Language models are widely used in education. Although modern deep-learning models achieve strong performance on question-answering tasks, they still make errors. To avoid misleading students with wrong answers, it is important to calibrate these models' confidence, that is, their prediction probabilities. In this work, we propose training an XGBoost model on top of BERT to output corrected probabilities, using features derived from the attention mechanism. Our hypothesis is that the level of uncertainty carried in the flow of attention is related to the quality of the model's response itself.
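The abstract does not specify which attention-based features are used. As an illustration only, one plausible feature of this kind is the Shannon entropy of an attention distribution: a head that spreads its weight uniformly over tokens is arguably less certain than one that concentrates on a few. The sketch below (hypothetical values, not from the paper) computes such a feature, which could be fed, alongside the model's raw softmax probability, into a gradient-boosted calibrator like XGBoost.

```python
import math

def attention_entropy(weights):
    """Shannon entropy of one attention distribution (a row of an
    attention matrix, summing to 1). Lower entropy means the head
    concentrates its attention on fewer tokens."""
    return -sum(w * math.log(w) for w in weights if w > 0)

# Hypothetical attention rows from a single head over four tokens.
focused = [0.85, 0.05, 0.05, 0.05]   # concentrated attention
diffuse = [0.25, 0.25, 0.25, 0.25]   # uniform attention

# A focused distribution yields a lower entropy feature value.
print(attention_entropy(focused) < attention_entropy(diffuse))  # prints True
```

In a full pipeline, one such scalar per head and layer would form the feature vector on which the XGBoost calibrator is trained against answer correctness; the exact feature set is the paper's contribution and is not reproduced here.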
Keywords
language models, education