MMT-GD: Multi-Modal Transformer with Graph Distillation for Cross-Cultural Humor Detection

MuSe '23: Proceedings of the 4th Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation (2023)

Abstract
In this paper, we present a solution for the Cross-Cultural Humor Detection (MuSe-Humor) sub-challenge, which is part of the Multimodal Sentiment Analysis Challenge (MuSe) 2023. The MuSe-Humor task aims to detect humor from multimodal data, including video, audio, and text, in a cross-cultural context: the training data consists of German recordings, while the test data consists of English recordings. To tackle this sub-challenge, we propose a method called MMT-GD, which leverages a multimodal transformer model to effectively integrate the multimodal data. Additionally, we incorporate graph distillation to ensure that the fusion process captures discriminative features from each modality, avoiding excessive reliance on any single modality. Experimental results validate the effectiveness of our approach, achieving an Area Under the Curve (AUC) score of 0.8704 on the test set and securing the third position in the challenge.
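The abstract does not spell out the MMT-GD architecture or the exact graph-distillation objective, so the following is only a minimal, hypothetical sketch of the general idea it describes: per-modality encoders feed a transformer-based fusion module, and an auxiliary distillation-style loss keeps the fused prediction consistent with each unimodal prediction so that fusion does not collapse onto a single dominant modality. All layer sizes, loss weights, and the simple KL-based distillation term are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch: multimodal transformer fusion with a distillation-style
# auxiliary loss. Dimensions, hyperparameters, and the KL stand-in for the
# paper's graph distillation are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalFusion(nn.Module):
    def __init__(self, dims, d_model=256, num_classes=2):
        super().__init__()
        # Project each modality (video/audio/text features) into a shared space.
        self.proj = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in dims.items()})
        # A transformer encoder fuses the concatenated modality token sequences.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        # Per-modality heads act as "teachers" for the fused "student" prediction.
        self.uni_heads = nn.ModuleDict({m: nn.Linear(d_model, num_classes) for m in dims})
        self.fused_head = nn.Linear(d_model, num_classes)

    def forward(self, inputs):
        # inputs: dict of (batch, seq_len_m, dim_m) tensors, one entry per modality.
        tokens, uni_logits = [], {}
        for m, x in inputs.items():
            h = self.proj[m](x)                        # (B, T_m, d_model)
            tokens.append(h)
            uni_logits[m] = self.uni_heads[m](h.mean(dim=1))
        fused = self.fusion(torch.cat(tokens, dim=1))  # (B, sum T_m, d_model)
        fused_logits = self.fused_head(fused.mean(dim=1))
        return fused_logits, uni_logits


def distillation_loss(fused_logits, uni_logits, labels, alpha=0.5, tau=2.0):
    # Supervised loss on the fused prediction, plus a KL term pulling the fused
    # distribution toward each unimodal teacher, so no single modality dominates.
    ce = F.cross_entropy(fused_logits, labels)
    kl = sum(
        F.kl_div(F.log_softmax(fused_logits / tau, dim=-1),
                 F.softmax(u.detach() / tau, dim=-1),
                 reduction="batchmean")
        for u in uni_logits.values()
    ) / len(uni_logits)
    return ce + alpha * kl


if __name__ == "__main__":
    # Toy feature dimensions; real MuSe-Humor features would differ.
    model = MultimodalFusion(dims={"video": 512, "audio": 128, "text": 768})
    batch = {"video": torch.randn(4, 10, 512),
             "audio": torch.randn(4, 20, 128),
             "text": torch.randn(4, 16, 768)}
    labels = torch.randint(0, 2, (4,))
    fused_logits, uni_logits = model(batch)
    loss = distillation_loss(fused_logits, uni_logits, labels)
    loss.backward()
    print(loss.item())
```

The paper's actual graph distillation presumably propagates knowledge across a graph over modalities rather than the plain per-modality KL term used above; the sketch only illustrates the stated goal of keeping discriminative information from every modality in the fused representation.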