AMuSE: Adaptive Multimodal Analysis for Speaker Emotion Recognition in Group Conversations
2023 IEEE Ninth International Conference on Multimedia Big Data (BigMM), 2024
Abstract
Analyzing individual emotions during group conversations is crucial for
developing intelligent agents capable of natural human-machine interaction.
While reliable emotion recognition techniques depend on different modalities
(text, audio, video), the inherent heterogeneity between these modalities and
the dynamic cross-modal interactions influenced by an individual's unique
behavioral patterns make the task of emotion recognition very challenging. This
difficulty is compounded in group settings, where the emotion and its temporal
evolution are influenced not only by the individual but also by external
factors such as audience reactions and the context of the ongoing conversation. To
meet this challenge, we propose a Multimodal Attention Network (MAN) that captures
cross-modal interactions at various levels of spatial abstraction by jointly
learning an interacting set of mode-specific Peripheral and Central
networks. The proposed MAN injects cross-modal attention via its Peripheral
key-value pairs within each layer of a mode-specific Central query network. The
resulting cross-attended mode-specific descriptors are then combined using an
Adaptive Fusion technique that enables the model to integrate the
discriminative and complementary mode-specific data patterns within an
instance-specific multimodal descriptor. Given a dialogue represented by a
sequence of utterances, the proposed AMuSE model condenses both spatial and
temporal features into two dense descriptors: speaker-level and
utterance-level. This not only delivers better classification performance
(a 3-5% improvement) on large-scale public datasets but also helps users
understand the reasoning behind each emotion prediction via the model's
Multimodal Explainability Visualization module.
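To make the described mechanism concrete, the sketch below illustrates, in PyTorch, a mode-specific Central query stream whose layers cross-attend to Peripheral key-value pairs from another modality, followed by an adaptive (gated) fusion of the mode-specific descriptors. All class names, dimensions, and the gating formulation are our assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CrossAttentionLayer(nn.Module):
    """One Central layer: queries come from the host modality;
    keys/values are injected from a Peripheral modality."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, query: torch.Tensor, peripheral: torch.Tensor) -> torch.Tensor:
        # Cross-modal attention: Q from the Central stream, K/V from the Peripheral stream.
        attended, _ = self.attn(query, peripheral, peripheral)
        x = self.norm1(query + attended)
        return self.norm2(x + self.ff(x))


class AdaptiveFusion(nn.Module):
    """Instance-specific gated fusion of mode-specific descriptors
    (a plausible reading of 'Adaptive Fusion'; the gating is assumed)."""

    def __init__(self, dim: int, num_modes: int):
        super().__init__()
        self.gate = nn.Linear(dim * num_modes, num_modes)

    def forward(self, descriptors: list) -> torch.Tensor:
        # descriptors: list of (batch, dim) mode-specific vectors.
        stacked = torch.stack(descriptors, dim=1)             # (batch, modes, dim)
        weights = torch.softmax(self.gate(torch.cat(descriptors, dim=-1)), dim=-1)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (batch, dim)


# Usage: text as the Central stream, audio as one Peripheral stream.
dim = 128
text_feats = torch.randn(8, 20, dim)    # (batch, text tokens, dim)
audio_feats = torch.randn(8, 50, dim)   # (batch, audio frames, dim)

layer = CrossAttentionLayer(dim)
text_attended = layer(text_feats, audio_feats)   # text queries attend to audio

fusion = AdaptiveFusion(dim, num_modes=2)
multimodal = fusion([text_attended.mean(dim=1), audio_feats.mean(dim=1)])
print(multimodal.shape)  # torch.Size([8, 128])
```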
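Likewise, a minimal sketch of how per-utterance multimodal descriptors might be condensed into the two dense descriptors the abstract mentions: utterance-level (temporally contextualized) and speaker-level (pooled over a target speaker's turns). The GRU and mean-pooling choices here are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class DialogueEncoder(nn.Module):
    """Condense fused per-utterance descriptors into utterance-level and
    speaker-level representations (illustrative, not the authors' design)."""

    def __init__(self, dim: int):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, utterances: torch.Tensor, speaker_ids: torch.Tensor,
                target_speaker: int):
        # utterances: (batch, num_utterances, dim) fused multimodal descriptors.
        utterance_level, _ = self.gru(utterances)  # context-aware utterance descriptors
        # Pool the target speaker's contextualized utterances into one vector.
        mask = (speaker_ids == target_speaker).unsqueeze(-1).float()
        speaker_level = (utterance_level * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        return utterance_level, speaker_level


# Usage on toy data: 2 dialogues, 6 utterances each, 128-dim descriptors.
enc = DialogueEncoder(dim=128)
utts = torch.randn(2, 6, 128)
speakers = torch.randint(0, 3, (2, 6))  # speaker id per utterance
utt_desc, spk_desc = enc(utts, speakers, target_speaker=0)
print(utt_desc.shape, spk_desc.shape)  # torch.Size([2, 6, 128]) torch.Size([2, 128])
```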
Keywords
Emotion Recognition, Multimodal Sensing, Supervised Learning