Mixture of Attention Variants for Modal Fusion in Multi-Modal Sentiment Analysis

Chao He, Xinghua Zhang, Dongqing Song, Yingshan Shen, Chengjie Mao, Huosheng Wen, Dingju Zhu, Lihua Cai

Big Data and Cognitive Computing (2024)

Abstract
With better network access and the widespread adoption of personal smartphones, the explosion of multi-modal data, particularly opinionated video messages, has created urgent demands and immense opportunities for Multi-Modal Sentiment Analysis (MSA). Deep learning with the attention mechanism has served as the foundation for most state-of-the-art MSA models, owing to its ability to learn complex inter- and intra-modality relationships embedded in video messages, both temporally and spatially. However, modal fusion remains a major challenge because of the vast feature space created by interactions among different data modalities. To address this challenge, we propose an MSA algorithm based on deep learning and the attention mechanism, namely the Mixture of Attention Variants for Modal Fusion (MAVMF). The MAVMF algorithm follows a two-stage process: in stage one, self-attention extracts image and text features, and a bidirectional gated recurrent unit (GRU) module captures dependency relationships across the video discourse; in stage two, four multi-modal attention variants learn the emotional contributions of salient features from each modality. Our proposed approach is end-to-end and achieves superior performance compared to state-of-the-art algorithms on the two largest public datasets, CMU-MOSI and CMU-MOSEI.
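To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of one text-image path. It is an illustration assuming standard components, not the paper's exact architecture: all module names, dimensions, and the choice of a single cross-modal attention variant (the paper uses four, and also processes video discourse with audio) are hypothetical.

```python
# Minimal sketch of the two-stage MAVMF idea (one text-image path only).
# Dimensions, layer choices, and names are illustrative assumptions.
import torch
import torch.nn as nn

class MAVMFSketch(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, hidden=128, heads=4):
        super().__init__()
        # Stage one: per-modality self-attention over utterance sequences,
        # then a bidirectional GRU to capture discourse-level dependencies.
        self.text_attn = nn.MultiheadAttention(text_dim, heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(image_dim, heads, batch_first=True)
        self.text_gru = nn.GRU(text_dim, hidden, bidirectional=True, batch_first=True)
        self.image_gru = nn.GRU(image_dim, hidden, bidirectional=True, batch_first=True)
        # Stage two: one cross-modal attention variant; the full model would
        # combine four such variants before fusion.
        self.cross_attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.classifier = nn.Linear(4 * hidden, 1)  # sentiment regression head

    def forward(self, text, image):
        # text: (batch, seq, text_dim); image: (batch, seq, image_dim)
        t, _ = self.text_attn(text, text, text)      # intra-modal self-attention
        v, _ = self.image_attn(image, image, image)
        t, _ = self.text_gru(t)                      # -> (batch, seq, 2*hidden)
        v, _ = self.image_gru(v)
        # Cross-modal attention: text queries attend to image keys/values.
        tv, _ = self.cross_attn(t, v, v)
        fused = torch.cat([t, tv], dim=-1).mean(dim=1)  # pool over the sequence
        return self.classifier(fused)
```

In the full model, audio would join as a third modality and each of the four attention variants would contribute a fused representation before classification; the sketch keeps a single path for brevity.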
Keywords
multi-modality, attention mechanism, sentiment analysis, feature fusion, deep learning