Cognitive Workload Assessment via Eye Gaze and EEG in an Interactive Multi-Modal Driving Task

Multimodal Interfaces and Machine Learning for Multimodal Interaction (2022)

Abstract
Assessing the cognitive workload of human interactants in mixed-initiative teams is a critical capability for autonomous interactive systems, enabling adaptations that improve team performance. Yet, owing to diverging evidence, it remains unclear which sensing modality works best for determining human workload. In this paper, we report results from an empirical study designed to answer this question by collecting eye gaze and electroencephalogram (EEG) data from human subjects performing an interactive multi-modal driving task. Different levels of cognitive workload were induced by introducing secondary tasks, such as dialogue, braking events, and tactile stimulation, in the course of driving. Our results show that pupil diameter is a more reliable indicator of workload than EEG. More importantly, none of the five machine learning models combining the extracted EEG and pupil diameter features improved workload classification over eye gaze alone, suggesting that eye gaze is a sufficient modality for assessing human cognitive workload in interactive, multi-modal, multi-task settings.
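To make the comparison concrete, the sketch below shows one plausible way to classify workload from windowed pupil-diameter features, in the spirit of the models the paper evaluates. It is an illustrative assumption, not the authors' pipeline: the sampling rate, window length, summary statistics, random-forest classifier, and synthetic data are all placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def pupil_features(diameter, fs=60, win_s=5):
        # Slice a pupil-diameter trace (mm, sampled at fs Hz) into
        # non-overlapping windows and summarize each window.
        win = fs * win_s
        n = len(diameter) // win
        segs = diameter[: n * win].reshape(n, win)
        return np.column_stack([
            segs.mean(axis=1),                    # mean diameter
            segs.std(axis=1),                     # variability
            segs.max(axis=1) - segs.min(axis=1),  # within-window range
        ])

    # Synthetic stand-in data: higher workload tends to dilate the pupil
    # and increase its variability.
    rng = np.random.default_rng(0)
    low = 3.0 + 0.10 * rng.standard_normal(60 * 300)   # 5 min, "low" workload
    high = 3.6 + 0.25 * rng.standard_normal(60 * 300)  # 5 min, "high" workload

    X = np.vstack([pupil_features(low), pupil_features(high)])
    y = np.repeat([0, 1], len(X) // 2)                 # 0 = low, 1 = high

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("5-fold CV accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())

Appending EEG band-power features to X would mirror the paper's multimodal condition; the reported finding is that such fusion did not outperform the eye-gaze features alone.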
Keywords
cognitive workload classification, pupillometry, eye gaze, EEG, multi-modality learning, autonomous interactive systems, mixed-initiative teams, artificial agents