SCANET: Improving multimodal representation and fusion with sparse- and cross-attention for multimodal sentiment analysis

Hao Wang, Mingchuan Yang, Zheng Li, Zhenhua Liu, Jie Hu, Ziwang Fu, Feng Liu

Computer Animation and Virtual Worlds (2022)

Abstract
Learning unimodal representations and improving multimodal fusion are the two cores of multimodal sentiment analysis (MSA). However, previous methods ignore the information differences between modalities: the text modality carries higher-order semantic features than the other modalities. In this article, we propose a sparse- and cross-attention (SCANET) framework with an asymmetric architecture to improve the performance of multimodal representation and fusion. Specifically, in the unimodal representation stage, we use sparse attention to improve representation efficiency and reduce low-order redundant features in the audio and visual modalities. In the multimodal fusion stage, we design an innovative asymmetric fusion module, which utilizes the audio and visual modality information matrices as weights to strengthen the target text modality. We also introduce contrastive learning to effectively enhance complementary features between modalities. We evaluate SCANET on the CMU-MOSI and CMU-MOSEI datasets, and experimental results show that our proposed method achieves state-of-the-art performance.
Keywords
cross-modal attention, multimodal fusion, multimodal sentiment analysis, sparse transformer
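
To make the asymmetric fusion described in the abstract more concrete, below is a minimal PyTorch sketch of a text-targeted cross-attention step in which text queries attend over audio and visual sequences, so the non-text modalities re-weight the target text representation. The module name CrossModalFusion, the dimensions, and the residual wiring are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Text-targeted cross-attention fusion (illustrative sketch, not the paper's code)."""
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        # Text queries attend over audio and visual keys/values, so the
        # non-text modalities act as weights that strengthen the text stream.
        self.text_from_audio = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_from_visual = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text, audio, visual):
        # text / audio / visual: (batch, seq_len, d_model) unimodal sequences
        t_a, _ = self.text_from_audio(text, audio, audio)
        t_v, _ = self.text_from_visual(text, visual, visual)
        # The residual connection keeps the text modality as the fusion target.
        return self.norm(text + t_a + t_v)

# Usage: fuse 50 text tokens with longer audio and visual streams.
fusion = CrossModalFusion()
fused = fusion(torch.randn(2, 50, 128),
               torch.randn(2, 400, 128),
               torch.randn(2, 200, 128))  # -> (2, 50, 128)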