Improving Multimodal Classification of Social Media Posts by Leveraging Image-Text Auxiliary Tasks
Conference of the European Chapter of the Association for Computational Linguistics (2023)
Abstract
Effectively leveraging multimodal information from social media posts is
essential to various downstream tasks such as sentiment analysis, sarcasm
detection or hate speech classification. Jointly modeling text and images is
challenging because cross-modal semantics might be hidden or the relation
between image and text is weak. However, prior work on multimodal
classification of social media posts has not yet addressed these challenges. In
this work, we present an extensive study on the effectiveness of using two
auxiliary losses jointly with the main task during fine-tuning multimodal
models. First, Image-Text Contrastive (ITC) is designed to minimize the
distance between image-text representations within a post, thereby effectively
bridging the gap between posts where the image plays an important role in
conveying the post's meaning. Second, Image-Text Matching (ITM) enhances the
model's ability to understand the semantic relationship between images and
text, thus improving its capacity to handle ambiguous or loosely related
modalities. We combine these objectives with five multimodal models across five
diverse social media datasets, demonstrating consistent improvements of up to
2.6 points F1. Our comprehensive analysis shows the specific scenarios where
each auxiliary task is most effective.
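The ITC objective described above is commonly implemented as an in-batch contrastive (InfoNCE-style) loss over image and text embeddings, where matched image-text pairs from the same post are pulled together and mismatched pairs pushed apart; ITM is typically a binary classifier over matched versus mismatched pairs. The sketch below illustrates the contrastive formulation only, using numpy; the function name, temperature value, and loss details are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def itc_loss(img_emb, txt_emb, temperature=0.07):
    """Sketch of an in-batch Image-Text Contrastive loss (hypothetical,
    not the paper's code). Rows of img_emb and txt_emb are paired: the
    i-th image belongs with the i-th text (the diagonal of the
    similarity matrix), and all other pairings serve as negatives."""
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (n, n) scaled similarity matrix

    def xent_diag(l):
        # cross-entropy with the correct class on the diagonal
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # symmetrize: image-to-text and text-to-image directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

With perfectly aligned pairs the diagonal dominates and the loss is low; shuffling one modality misaligns the diagonal and raises it, which is the behavior the auxiliary task exploits during fine-tuning.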