Multi-Modal Hierarchical Empathetic Framework for Social Robots With Affective Body Control

IEEE Transactions on Affective Computing (2024)

Abstract
Social robots must understand human emotions and provide affective and behavioral responses during human-robot interaction; however, current social robots largely lack empathetic capabilities. In this work, we propose a novel Multi-modal Hierarchical Empathetic (MHE) framework for generating empathetic responses for social robots. MHE comprises a multi-modal fusion and emotion recognition module, an empathetic dialogue generation module, and an expression generation module. By fusing sensor signals from different modalities, the robot can recognize human emotions and generate affective responses. Multiple experiments are conducted on a real robot, Pepper, to evaluate the proposed framework. In a blind evaluation, participants attempt to discriminate between MHE-generated text and human responses, and most agree that MHE can effectively generate human-like, empathetic responses. To better assess the similarity between human-robot and human-human interactions, we further propose a period eye movement map (PEM) captured with an eye tracker. By comparing PEMs across conditions, the experimental results demonstrate the improvement MHE brings to human-robot interaction.
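The pipeline sketched in the abstract (multi-modal fusion → emotion recognition → empathetic response selection) can be illustrated with a minimal toy example. Everything below is a hypothetical sketch: the fusion rule (element-wise mean), the nearest-prototype classifier, and the response templates are illustrative stand-ins, not the paper's actual models.

```python
# Hypothetical sketch of an MHE-style pipeline: fuse per-modality feature
# vectors, classify the emotion, then pick an empathetic response template.
# All function names, weights, and templates are illustrative assumptions.

def fuse(modalities):
    """Late fusion: element-wise mean of equal-length feature vectors."""
    n = len(modalities)
    return [sum(vals) / n for vals in zip(*modalities)]

def recognize_emotion(features, prototypes):
    """Nearest-prototype classification (placeholder for a learned model)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(features, prototypes[label]))

# Stand-ins for the empathetic dialogue generation module.
RESPONSES = {
    "sad": "I'm sorry to hear that. Do you want to talk about it?",
    "happy": "That's wonderful! Tell me more.",
}

def empathetic_response(audio_feat, vision_feat, prototypes):
    emotion = recognize_emotion(fuse([audio_feat, vision_feat]), prototypes)
    return emotion, RESPONSES.get(emotion, "I see.")

# Example: two 3-dimensional modality features, two emotion prototypes.
protos = {"sad": [0.0, 0.0, 1.0], "happy": [1.0, 1.0, 0.0]}
emotion, reply = empathetic_response([0.1, 0.0, 0.9], [0.0, 0.1, 0.8], protos)
# emotion == "sad"
```

In a real system each modality (speech prosody, facial landmarks, etc.) would pass through its own encoder before fusion, and the response would come from a generative dialogue model rather than a template lookup.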
Keywords
Hierarchical model, human-robot interaction, multi-modal empathetic response, social robot