Multimodal information fusion method in emotion recognition in the background of artificial intelligence

INTERNET TECHNOLOGY LETTERS(2024)

Abstract
Recent advances in Semantic IoT data integration have highlighted the importance of multimodal fusion in emotion recognition systems. Human emotions, shaped by innate disposition, learning, and communication, are often revealed through speech and facial expressions. Accordingly, this study proposes a hidden Markov model (HMM)-based multimodal emotion detection system that fuses speech recognition with facial expression recognition to improve emotion recognition rates. Integrating such emotion recognition systems with Semantic IoT data can offer new insights into human behavior and sentiment analysis, advancing data integration techniques in the context of the Internet of Things. Experimental findings indicate that single-modal emotion detection achieves a 76% accuracy rate for speech and 78% for facial expressions, whereas applying state-information fusion raises the recognition rate to 95%, exceeding the unimodal speech and facial-expression rates by 19 and 17 percentage points, respectively. This demonstrates the effectiveness of multimodal fusion in emotion recognition, yielding higher recognition rates and reduced workload compared with single-modal approaches.
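The abstract does not specify the exact fusion rule, so the following is only a minimal sketch of one common decision-level scheme: combining the per-emotion posterior scores produced by two unimodal recognizers (e.g., normalized HMM likelihoods for speech and for facial expressions) with a weighted sum. All names, weights, and scores here are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

# Hypothetical emotion classes; the paper does not list its label set.
EMOTIONS = ["happy", "angry", "sad", "neutral"]

def fuse(speech_probs, face_probs, w_speech=0.5):
    """Decision-level fusion: weighted sum of unimodal posteriors.

    speech_probs / face_probs: per-class probabilities from each
    unimodal recognizer (e.g., normalized HMM likelihoods).
    """
    speech = np.asarray(speech_probs, dtype=float)
    face = np.asarray(face_probs, dtype=float)
    fused = w_speech * speech + (1.0 - w_speech) * face
    return fused / fused.sum()  # renormalize to a distribution

def predict(speech_probs, face_probs, w_speech=0.5):
    """Return the fused label and the fused distribution."""
    fused = fuse(speech_probs, face_probs, w_speech)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: speech alone is ambiguous between happy and angry,
# but the facial channel clearly favors happy.
label, fused = predict([0.40, 0.38, 0.12, 0.10],
                       [0.60, 0.15, 0.15, 0.10])
print(label)  # → happy
```

The intuition matches the reported numbers: when each modality alone is error-prone, combining their evidence lets one channel resolve cases where the other is ambiguous, which is why the fused rate can exceed both unimodal rates.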
Keywords
emotion recognition, internet of things, multimodal information fusion, unimodal information fusion