Adaptive Multi-Feature Extraction Graph Convolutional Networks for Multimodal Target Sentiment Analysis

2022 IEEE International Conference on Multimedia and Expo (ICME)

Abstract
Multimodal target-oriented sentiment analysis aims to predict the sentiment polarities of target entities in a sentence by combining visual and linguistic information. However, most existing deep learning approaches fail to extract valuable information from the visual modality and ignore the syntactic dependency information embedded in the text modality. In this paper, we propose a two-stream adaptive multi-feature extraction graph convolutional network (AME-GCN), which translates the image into a textual caption and dynamically fuses the semantic and syntactic features from the given sentence and the generated caption to model the inter- and intra-modality dynamics. Extensive experiments on two multimodal Twitter datasets show the effectiveness of the proposed model against popular textual and multimodal approaches, demonstrating that AME-GCN is a strong alternative for this task.
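To make the two-stream design concrete, the sketch below illustrates the general idea described in the abstract: each stream (sentence and generated caption) runs graph convolutions over a syntactic-dependency adjacency matrix and a learned gate adaptively mixes semantic (token) and syntactic (GCN) features before classification. This is not the authors' code; all module names, dimensions, the shared gate, and the mean-pooling fusion are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(norm(A) H W)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # Row-normalize the adjacency so each node averages its neighbors.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.linear((adj / deg) @ h))

class TwoStreamFusion(nn.Module):
    """Hypothetical two-stream model: sentence and caption streams, each with
    a gated fusion of semantic (input) and syntactic (GCN) features."""
    def __init__(self, dim, num_layers=2, num_classes=3):
        super().__init__()
        self.sent_gcn = nn.ModuleList([GCNLayer(dim) for _ in range(num_layers)])
        self.cap_gcn = nn.ModuleList([GCNLayer(dim) for _ in range(num_layers)])
        self.gate = nn.Linear(2 * dim, dim)       # adaptive semantic/syntactic mix
        self.classifier = nn.Linear(2 * dim, num_classes)

    def _stream(self, tokens, adj, gcn_layers):
        h = tokens
        for layer in gcn_layers:
            h = layer(h, adj)                      # syntactic features via dependency graph
        g = torch.sigmoid(self.gate(torch.cat([tokens, h], dim=-1)))
        return g * tokens + (1 - g) * h            # gated semantic/syntactic fusion

    def forward(self, sent, sent_adj, cap, cap_adj):
        s = self._stream(sent, sent_adj, self.sent_gcn).mean(dim=1)
        c = self._stream(cap, cap_adj, self.cap_gcn).mean(dim=1)
        return self.classifier(torch.cat([s, c], dim=-1))

# Toy usage: batch of 2, sentence of 8 tokens, caption of 6 tokens, dim 64.
# Identity adjacencies stand in for real dependency-parse graphs.
model = TwoStreamFusion(dim=64)
sent, cap = torch.randn(2, 8, 64), torch.randn(2, 6, 64)
sent_adj = torch.eye(8).expand(2, 8, 8)
cap_adj = torch.eye(6).expand(2, 6, 6)
logits = model(sent, sent_adj, cap, cap_adj)       # shape (2, 3)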
Keywords
networks, multi-feature