Video sentiment analysis with bimodal information-augmented multi-head attention

Knowledge-Based Systems (2022)

Cited by 21 | 160 views
Abstract
Humans express feelings and emotions through different channels. Language, for example, can entail different sentiments under different visual–acoustic contexts. To understand human intentions precisely and to reduce the misunderstandings caused by ambiguity and sarcasm, we should consider multimodal signals, including textual, visual and acoustic ones. The crucial challenge is fusing the features of different modalities for sentiment analysis. To fuse the information carried by different modalities effectively and better predict sentiment, we design a novel multi-head attention based fusion network, inspired by the observation that the interactions between pairwise modalities differ and do not contribute equally to the final sentiment prediction. By assigning reasonable attention to the acoustic–visual, acoustic–textual and visual–textual features and exploiting a residual structure, we attend to the most significant features. We conduct extensive experiments on four public multimodal datasets, one in Chinese and three in English. The results show that our approach outperforms existing methods and can explain the contributions of bimodal interactions across multiple modalities.
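To illustrate the fusion idea described in the abstract, below is a minimal PyTorch sketch of pairwise bimodal multi-head attention with learned per-pair weights and a residual connection. It is not the authors' implementation: the module name, feature dimension, the softmax weighting over pairs, and the averaged-unimodal residual are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class BimodalAttentionFusion(nn.Module):
    """Hypothetical sketch of pairwise bimodal fusion with multi-head
    attention and a residual structure (not the paper's actual code)."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # One multi-head attention block per modality pair:
        # acoustic-visual, acoustic-textual, visual-textual.
        self.av_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.at_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vt_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learned scalar weights, so the three bimodal interactions
        # need not contribute equally to the fused representation.
        self.pair_weights = nn.Parameter(torch.ones(3))
        self.norm = nn.LayerNorm(dim)
        self.classifier = nn.Linear(dim, 1)  # scalar sentiment score

    def forward(self, a, v, t):
        # a, v, t: (batch, seq_len, dim) unimodal feature sequences.
        av, _ = self.av_attn(a, v, v)  # acoustic attends to visual
        at, _ = self.at_attn(a, t, t)  # acoustic attends to textual
        vt, _ = self.vt_attn(v, t, t)  # visual attends to textual
        w = torch.softmax(self.pair_weights, dim=0)
        fused = w[0] * av + w[1] * at + w[2] * vt
        # Residual connection: retain the (averaged) unimodal signal.
        fused = self.norm(fused + (a + v + t) / 3)
        return self.classifier(fused.mean(dim=1))

# Example forward pass with random unimodal features.
model = BimodalAttentionFusion()
a, v, t = (torch.randn(8, 20, 128) for _ in range(3))
score = model(a, v, t)  # shape: (8, 1)
```

The softmax over `pair_weights` is one simple way to realize the paper's observation that the three bimodal interactions should be weighted rather than averaged; the actual attention assignment in the paper may differ.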
Keywords
Information fusion, Multi-head attention, Multimodality, Sentiment analysis