Multi-Modal Sentiment Feature Learning Based On Sentiment Signal

12th Chinese Conference on Computer Supported Cooperative Work and Social Computing (ChineseCSCW 2017), 2017

Abstract
The multi-modal nature of social media content (e.g., texts and images) significantly challenges traditional text-based sentiment analysis approaches, and multi-modal sentiment analysis is therefore of great theoretical value for understanding and analyzing multi-modal content. To bridge the semantic gap between modalities and sentiment, we propose an unsupervised multi-modal sentiment feature learning method that extracts a General Sentiment Presentation (GSP) for multi-modal sentiment analysis. We take advantage of large-scale sentiment signals in social media to guide the feature learning process, and we leverage deep learning methods to extract hierarchical sentiment features in an unsupervised manner. Experimental results show that 1) GSP outperforms state-of-the-art hand-crafted sentiment features of different modalities; 2) GSP converges quickly and achieves high accuracy for sentiment classification; and 3) GSP has strong generalization ability.
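The abstract does not specify the network architecture, the loss, or the exact sentiment signals used for guidance. The sketch below is only one plausible reading, assuming emoticon-like weak signals as training targets and a simple two-branch encoder whose fused output stands in for the shared representation (GSP); the class name `MultiModalSentimentEncoder`, the feature dimensions, and the binary signal are all hypothetical.

```python
# Illustrative sketch only: not the authors' implementation.
import torch
import torch.nn as nn

class MultiModalSentimentEncoder(nn.Module):
    def __init__(self, text_dim=300, image_dim=512, hidden_dim=128, gsp_dim=64):
        super().__init__()
        # Separate encoder per modality (hypothetical feature sizes).
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden_dim), nn.ReLU())
        # Fusion layer producing the shared sentiment representation.
        self.fusion = nn.Sequential(nn.Linear(2 * hidden_dim, gsp_dim), nn.Tanh())
        # Head predicting the weak sentiment signal (e.g. positive/negative emoticon),
        # so no manually labeled data is required.
        self.signal_head = nn.Linear(gsp_dim, 2)

    def forward(self, text_feat, image_feat):
        h = torch.cat([self.text_encoder(text_feat),
                       self.image_encoder(image_feat)], dim=-1)
        gsp = self.fusion(h)              # shared multi-modal sentiment feature
        return gsp, self.signal_head(gsp)

# Toy training step on random tensors, standing in for social-media posts whose
# emoticons/hashtags provide the sentiment signal.
model = MultiModalSentimentEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

text_feat = torch.randn(32, 300)        # e.g. averaged word embeddings
image_feat = torch.randn(32, 512)       # e.g. CNN image features
signal = torch.randint(0, 2, (32,))     # weak sentiment signal per post

gsp, logits = model(text_feat, image_feat)
loss = criterion(logits, signal)
loss.backward()
optimizer.step()
```

In such a setup, the learned `gsp` vector could then be fed to a downstream sentiment classifier, which is the role the abstract describes for GSP.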
Keywords
Sentiment signal, unsupervised feature learning, multi-modal sentiment analysis