Intelligent Feedback on Hypothesis Testing

International Journal of Artificial Intelligence in Education (2020)

Abstract
Hypothesis testing involves a complex stepwise procedure that is challenging for many students in introductory university statistics courses. In this paper we assess how feedback from an Intelligent Tutoring System can address the logic of hypothesis testing and whether such feedback contributes to first-year social sciences students’ proficiency in carrying out hypothesis tests. The feedback design combined elements of the model-tracing and constraint-based modeling paradigms, addressing both the individual steps and the relations between steps. To evaluate the feedback, students in an experimental group (N = 163) received the designed intelligent feedback in six hypothesis-testing construction tasks, while students in a control group (N = 151) received only stepwise verification feedback in these tasks. Results showed that students receiving intelligent feedback spent more time on the tasks, solved more tasks, and made fewer errors than students receiving only verification feedback. These positive results did not transfer to follow-up tasks, which might be a consequence of the isolated nature of those tasks. We conclude that the designed feedback may support students in learning to solve hypothesis-testing construction tasks independently and that it facilitates the creation of more hypothesis-testing construction tasks.
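The abstract does not describe the tutoring system's implementation. As an illustration only, the sketch below shows one common way constraint-based feedback on a single hypothesis-testing step can be expressed: each constraint has a relevance condition, a satisfaction condition, and a hint that is shown when a relevant constraint is violated. All names here (StudentStep, Constraint, give_feedback, and the example task) are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of constraint-based feedback on one hypothesis-testing
# step. Not the authors' implementation; all names are illustrative only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class StudentStep:
    null_hypothesis: str   # e.g. "mu = 100"
    alt_hypothesis: str    # e.g. "mu > 100"
    test_statistic: str    # e.g. "t"
    alpha: float           # significance level chosen by the student

@dataclass
class Constraint:
    relevance: Callable[[StudentStep], bool]     # when does the constraint apply?
    satisfaction: Callable[[StudentStep], bool]  # is it satisfied?
    feedback: str                                # hint shown on violation

CONSTRAINTS: List[Constraint] = [
    Constraint(
        relevance=lambda s: True,
        satisfaction=lambda s: "=" in s.null_hypothesis,
        feedback="The null hypothesis should state equality (e.g. mu = 100).",
    ),
    Constraint(
        relevance=lambda s: ">" in s.alt_hypothesis or "<" in s.alt_hypothesis,
        satisfaction=lambda s: s.alt_hypothesis[:2] == s.null_hypothesis[:2],
        feedback="The alternative hypothesis must concern the same parameter as the null.",
    ),
    Constraint(
        relevance=lambda s: True,
        satisfaction=lambda s: 0 < s.alpha < 1,
        feedback="The significance level must lie between 0 and 1.",
    ),
]

def give_feedback(step: StudentStep) -> List[str]:
    """Return hints for all relevant, violated constraints, leaving the
    correct parts of the step untouched."""
    return [c.feedback for c in CONSTRAINTS
            if c.relevance(step) and not c.satisfaction(step)]

# Example: a student states H0 about mu but H1 about p; the mismatch is flagged.
print(give_feedback(StudentStep("mu = 100", "p > 0.5", "t", 0.05)))
```

Combining such constraints with step-by-step verification (as in model tracing) is one plausible reading of how feedback on both individual steps and the relations between steps could be produced; the paper itself should be consulted for the actual design.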
Keywords
Feedback, Hypothesis testing, Intelligent tutoring systems, Statistics education