Multimodal Emotion Recognition Based on Feature Fusion

2022 International Conference on Advanced Robotics and Mechatronics (ICARM), 2022

Abstract
In the field of human-computer interaction, human emotion recognition is a challenging problem and a key step toward barrier-free communication between humans and machines. At present, most emotion recognition algorithms are built on a single modality of social information, so their results are one-sided and easily disturbed; once separated from specific social environment conditions, their recognition accuracy often fails to meet practical requirements. To address these problems, this paper adopts multimodal input, simultaneously using three modalities (audio, text, and facial expression) to recognize emotion. Three single-modal emotion recognition models are proposed for the three input types, and multimodal emotion recognition models are constructed with different feature fusion methods. The experimental results show that the multimodal model achieves an accuracy of 93.92% on the CH-SIMS dataset. In addition, comparison with other emotion recognition models verifies the effectiveness of the proposed method.
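
For illustration, the sketch below shows one common form of feature-level fusion: per-modality feature vectors are concatenated and passed to a shared classification head. The encoder output dimensions, the hidden size, and the number of emotion classes here are assumptions for the example, not the paper's actual architecture or hyperparameters.

```python
# Minimal sketch of feature-level (early) fusion for three modalities.
# All dimensions and the classifier head are illustrative assumptions,
# not the authors' models or the CH-SIMS preprocessing pipeline.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, audio_dim=128, text_dim=256, face_dim=128, num_classes=3):
        super().__init__()
        # Concatenated per-modality features feed a small MLP classifier.
        self.head = nn.Sequential(
            nn.Linear(audio_dim + text_dim + face_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, audio_feat, text_feat, face_feat):
        # Feature-level fusion: concatenate along the feature dimension.
        fused = torch.cat([audio_feat, text_feat, face_feat], dim=-1)
        return self.head(fused)

# Usage with random tensors standing in for the three single-modal encoders.
model = FusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 256), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 3])
```

Decision-level (late) fusion, another method the paper's comparison implies, would instead combine the per-modality predictions (e.g., by averaging or weighting the logits) rather than their features.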
Keywords
emotion, fusion, feature, recognition