Visual Perception Based Engagement Awareness For Multiparty Human-Robot Interaction

INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS (2015)

Abstract
Computational systems for human-robot interaction (HRI) could benefit from visual perception of the social cues commonly employed in human-human interaction. However, existing systems typically focus on only one or two cues for attention or intention estimation. This research investigates how social robots may exploit a wide spectrum of visual cues in multiparty interactions. It is proposed that a vision system for social cue perception should be supported by two dimensions of functionality, namely, vision functionality and cognitive functionality. A vision-based system embracing both functionalities is proposed for a robot receptionist engaged in multiparty interactions. The vision functionality module consists of a suite of methods that computationally recognize visual cues relevant to understanding social behavior; their performance is validated against a ground-truth annotation dataset. The cognitive functionality module consists of two computational models that (1) quantify users' attention saliency and engagement intentions, and (2) drive engagement-aware behaviors with which the robot adjusts its direction of attention and manages the conversational floor. The robot's engagement-aware behaviors are evaluated in a multiparty dialog scenario. The results show that engagement-aware behavior based on visual perception significantly improves the effectiveness of communication and positively affects user experience.
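
The abstract describes a cognitive module that fuses multiple visual cues into per-user attention-saliency/engagement estimates, which in turn drive the robot's direction of attention and floor management. As a rough illustration only, the sketch below shows one way such cue fusion could look in Python; the cue set (gaze, head pose, body orientation, proximity), the linear weighting, and the speaker boost are assumptions made for this example and are not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class VisualCues:
    """Per-user cues from a hypothetical vision module, each normalized to [0, 1]."""
    gaze_toward_robot: float   # how directly the user's gaze targets the robot
    head_orientation: float    # alignment of head pose with the robot
    body_orientation: float    # alignment of torso with the robot
    proximity: float           # closeness to the robot (1.0 = nearest)
    is_speaking: bool          # whether the user currently holds the floor

# Illustrative weights; the paper's cue set and fusion model may differ.
WEIGHTS = {"gaze": 0.4, "head": 0.25, "body": 0.2, "proximity": 0.15}

def engagement_score(c: VisualCues) -> float:
    """Combine visual cues into a single engagement/attention-saliency score."""
    score = (WEIGHTS["gaze"] * c.gaze_toward_robot
             + WEIGHTS["head"] * c.head_orientation
             + WEIGHTS["body"] * c.body_orientation
             + WEIGHTS["proximity"] * c.proximity)
    # Boost the current speaker, a simple stand-in for conversational-floor logic.
    return min(1.0, score + (0.2 if c.is_speaking else 0.0))

def select_attention_target(users: dict[str, VisualCues]) -> str:
    """Return the user the robot should attend to (highest engagement score)."""
    return max(users, key=lambda uid: engagement_score(users[uid]))

if __name__ == "__main__":
    users = {
        "alice": VisualCues(0.9, 0.8, 0.7, 0.6, is_speaking=False),
        "bob":   VisualCues(0.3, 0.4, 0.5, 0.9, is_speaking=True),
    }
    # Prints the identifier of the highest-scoring user ("alice" here).
    print(select_attention_target(users))
```

A linear weighted sum is used here purely for clarity; the paper's models could equally be probabilistic or learned, and the point of the sketch is only the overall flow: per-user cues in, a scalar engagement estimate out, attention directed to the top-scoring user.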
Keywords
Human-robot interaction, social robots, nonverbal cues, robot vision, attention, intention, engagement, multiparty conversation