Vision-based Engagement Detection in Virtual Reality

DMIAF (2016)

Abstract
User engagement modeling for manipulating actions in vision-based interfaces is one of the most important case studies of user mental state detection. In a Virtual Reality environment that employs camera sensors to recognize human activities, the system has to know when the user intends to perform an action and when he or she is disengaged. Without a proper algorithm for recognizing engagement status, any activity could be interpreted as a manipulating action; this is known as the "Midas Touch" problem. The baseline approach to this problem is to activate the gesture recognition system with a focus gesture such as waving or raising a hand. However, a desirable natural user interface should be able to understand the user's mental status automatically. In this paper, a novel multi-modal model for engagement detection, DAIA, is presented. Using DAIA, the spectrum of mental states involved in performing an action is quantized into a finite number of engagement states. For this purpose, a Finite State Transducer (FST) is designed. This engagement framework shows how to integrate multi-modal information from user biometric data streams such as 2D and 3D imaging. The FST makes state transitions smooth by combining several boolean expressions. The FST achieves a true detection rate of 92.3% in total across four different states. Results also show that the FST segments user hand gestures more robustly.
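The abstract describes an FST whose transitions fire on boolean combinations of multi-modal features. A minimal sketch of that idea is below; the four state names and the feature predicates are illustrative assumptions, not details taken from the paper:

```python
# Sketch of an engagement FST driven by boolean feature combinations.
# State names (DISENGAGED/ATTENTION/INTENTION/ACTION) and the features
# below are hypothetical; the paper's actual predicates are not given here.
from dataclasses import dataclass


@dataclass
class Features:
    """Per-frame boolean features extracted from 2D/3D imaging (assumed)."""
    face_visible: bool
    hand_raised: bool
    hand_moving: bool


class EngagementFST:
    STATES = ("DISENGAGED", "ATTENTION", "INTENTION", "ACTION")

    def __init__(self):
        self.state = "DISENGAGED"

    def step(self, f: Features) -> str:
        """Advance one frame; transitions are boolean expressions over f."""
        s = self.state
        if s == "DISENGAGED" and f.face_visible:
            s = "ATTENTION"
        elif s == "ATTENTION":
            if not f.face_visible:
                s = "DISENGAGED"
            elif f.hand_raised:
                s = "INTENTION"
        elif s == "INTENTION":
            if f.hand_raised and f.hand_moving:
                s = "ACTION"          # gesture segment starts here
            elif not f.face_visible:
                s = "DISENGAGED"
        elif s == "ACTION" and not f.hand_moving:
            s = "ATTENTION"           # gesture segment ends here
        self.state = s
        return s
```

Because manipulating actions are only interpreted while the machine is in the ACTION state, stray movements in other states are ignored, which is how such a design sidesteps the "Midas Touch" problem.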
Keywords
Gesture Recognition Systems, User Engagement Detection, Human Activity Recognition, Vision-based Interface, Virtual Reality, Finite State Machine