Using After-Action Review Based on Automated Performance Assessment to Enhance Training Effectiveness

Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2010)

Abstract
Training simulators have become increasingly popular tools for teaching humans to perform in complex environments. However, how to provide individualized, scenario-specific assessment and feedback to students remains largely an open question. In this work, we follow up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two of three domain-specific performance metrics.
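At a high level, assessment from observed examples of good and bad performance amounts to supervised classification of trainee runs. The paper does not include code, so the sketch below is only a hypothetical illustration of that idea using nearest-neighbor matching; the feature names, metric values, and labels are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of example-based performance assessment in the
# spirit of AEMASE. Each run is summarized as a feature vector; stored
# expert-labeled runs are marked "good" or "bad"; a new run is graded
# by its nearest labeled example (1-NN).

import math

# Illustrative labeled examples: (feature_vector, label).
# Features here are invented domain metrics, e.g. reaction time (s),
# path deviation (m), and communication count.
EXAMPLES = [
    ((1.2, 5.0, 8), "good"),
    ((1.5, 6.5, 7), "good"),
    ((3.8, 20.0, 2), "bad"),
    ((4.1, 18.5, 3), "bad"),
]

def euclidean(a, b):
    """Euclidean distance between two metric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assess(run):
    """Label a new run by its nearest stored example."""
    nearest = min(EXAMPLES, key=lambda ex: euclidean(ex[0], run))
    return nearest[1]

if __name__ == "__main__":
    # A trainee run with slow reactions and large path deviation.
    print(assess((3.5, 17.0, 4)))  # -> "bad"
```

A real system would learn from many more examples and richer behavioral features, but the core design choice this sketch reflects is that no hand-authored grading rules are needed: the assessment is induced directly from labeled observations.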
Keywords
human factors engineering, human factors, performance, simulation, feedback, simulators, metrics, evaluation