
Transformer-based adaptive contrastive learning for multimodal sentiment analysis

Yifan Hu, Xi Huang, Xianbing Wang, Hai Lin, Rong Zhang

Multimedia Tools and Applications (2024)

Abstract
Multimodal Sentiment Analysis (MSA) plays a crucial role in discerning and analyzing the diverse attitudes and opinions of Internet users across social media platforms. Nevertheless, existing sentiment analysis methods exhibit a text-centric bias, with the textual modality taking a predominant role in multimodal scenarios. In complex contexts, different unimodal signals may carry inconsistent emotional tendencies, making it difficult to detect hidden sentiments such as ambiguity or irony when focusing primarily on text. To address these issues, we propose a Transformer-based multimodal modality enhancement network (MMEN) suited to complex contexts. The primary objective is to emphasize shared information between modalities during multimodal fusion, encouraging modal synergy and avoiding the neglect of weaker modalities. Specifically, we employ a multi-head attention mechanism for both unimodal feature extraction and multimodal fusion to obtain latent representations for multimodal sentiment analysis. Furthermore, an adaptive contrastive learning module is designed that uses fine-grained sentiment information to retain crucial emotional cues from unimodal sources and enhance semantic fusion. We also introduce multi-task learning to dynamically adjust each modality's contribution. Experimental results on the public CH-SIMSv2.0 dataset demonstrate that the proposed model outperforms the baseline model in accuracy and F1 score by 2.51%.
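Two of the mechanisms the abstract names are concrete enough to sketch: multi-head cross-modal attention for fusion, and a contrastive term that keeps unimodal emotional cues aligned with the fused representation. Below is a minimal PyTorch sketch of that general pattern; the module names, dimensions, and the InfoNCE-style loss are illustrative assumptions, not the authors' released MMEN implementation.

```python
# A minimal sketch of cross-modal attention fusion plus a contrastive
# alignment term, assuming standard PyTorch building blocks. This is NOT
# the paper's code; shapes, names, and the loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFusion(nn.Module):
    """Fuse two modality sequences with multi-head cross-attention."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_mod, context_mod):
        # query_mod attends to context_mod, so information shared between
        # the two modalities is emphasized in the fused representation.
        fused, _ = self.attn(query_mod, context_mod, context_mod)
        return self.norm(query_mod + fused)  # residual connection

def contrastive_alignment(z_a, z_b, temperature=0.07):
    """InfoNCE-style loss pulling paired embeddings of the same sample
    together while pushing apart mismatched pairs within the batch.

    z_a, z_b: (batch, dim) pooled representations of the same samples.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z_a.size(0))    # i-th row matches i-th column
    return F.cross_entropy(logits, targets)

# Toy usage: text attends to audio, then the pooled fused vector is
# aligned with the pooled text vector via the contrastive term.
text  = torch.randn(8, 20, 128)   # (batch, seq_len, dim)
audio = torch.randn(8, 50, 128)
fusion = CrossModalFusion()
fused = fusion(text, audio)
loss = contrastive_alignment(fused.mean(dim=1), text.mean(dim=1))
loss.backward()
```

In the paper's multi-task setup, a term like this would presumably be weighted against the main sentiment prediction loss, with the weights adjusting each modality's contribution; the abstract does not specify that weighting scheme, so it is omitted here.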
Keywords
Multimodal sentiment analysis, Transformer, Multimodal fusion, Modality enhancement, Contrastive learning