BERT-based Regression Model for Micro-edit Humor Classification Task

Yuancheng Chen, Yi Hou, Deqiang Ye, Yuehang Yu

2021 INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, INFORMATION AND COMMUNICATION ENGINEERING (2021)

Abstract
Choosing the more humorous of two edited headlines is a subfield of humor detection and generation. This paper addresses the second subtask of the SemEval-2020 shared task "Assessing Humor in Edited News Headlines", which asks how machines can understand the humor produced by an atomic edit to an original headline and automatically select the funnier of two different edits. Since both substitute words for the same original headline carry crowd-sourced funniness scores, we explore not only a classification model but also a regression model for this task. For training, we first compare two embedding approaches, GloVe and BERT, and then combine them with different network heads: a fully connected layer, BiLSTM, and GRU. Our BERT-based model achieves 64% accuracy, ranking second among more than 50 teams in the competition. Furthermore, by comparing the results of the above models, we select representative misclassified samples and analyze the likely causes for future study. The experiments indicate that the edited sentence largely accounts for edit humor, whereas the original sentence alone has little effect. Moreover, using the combination of the edited and original sentences as input yields the best performance, suggesting that edit humor arises from the edited sentence together with the difference before and after modification.
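The pipeline the abstract describes reduces classification to regression: score each edited headline with the trained regressor, then pick the edit with the higher predicted funniness. A minimal sketch of that decision logic is below; `score_fn` stands in for the paper's BERT-based regressor, the `<word/>` markup follows the SemEval-2020 Task 7 data format, and the toy scorer at the end is purely hypothetical, for illustration only.

```python
import re

def apply_edit(original: str, edit: str) -> str:
    # In the SemEval-2020 Task 7 data, the span to replace in the original
    # headline is marked as "<word/>"; substituting the edit word in for it
    # produces the edited headline.
    return re.sub(r"<[^>]+/>", edit, original)

def pick_funnier(score_fn, original: str, edit1: str, edit2: str) -> int:
    # The regressor predicts a crowd-sourced funniness score for each edited
    # headline; classification is then just a comparison of the two scores.
    # score_fn takes (edited, original), since the paper reports that feeding
    # the model both the edited and the original sentence works best.
    h1 = apply_edit(original, edit1)
    h2 = apply_edit(original, edit2)
    s1 = score_fn(h1, original)
    s2 = score_fn(h2, original)
    return 1 if s1 >= s2 else 2

# Hypothetical stand-in scorer (NOT the paper's model): longer edit "wins".
toy_scorer = lambda edited, original: len(edited)
```

In practice `score_fn` would wrap the BERT encoder plus the regression head trained on the crowd-sourced scores; only the comparison step above is fixed by the task definition.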
Keywords
Micro-edit Humor, Humor Detection, BERT, NLP