How to Solve Few-Shot Abusive Content Detection Using the Data We Actually Have
arXiv (2023)
Abstract
Due to the broad range of social media platforms, the requirements of abusive
language detection systems are varied and ever-changing. A large set of
annotated corpora with different properties and label sets has already been
created, e.g. for hate or misogyny detection, but the form and targets of
abusive speech are constantly evolving. Since annotating new corpora is
expensive, in this work we leverage the datasets we already have, covering a
wide range of tasks related to abusive language detection. Our goal is to
build models cheaply for a new target label set and/or language, using only a
few training examples of the target domain. We propose a two-step approach: we
first train a model in a multitask fashion on the existing datasets, and then
carry out few-shot adaptation to the target requirements. Our experiments show
that, using the existing datasets and only a few shots of the target task,
model performance improves both monolingually and across languages. Our
analysis also shows that the models acquire a general understanding of abusive
language: they improve predictions for labels that are present only in the
target dataset, and they benefit from knowledge about labels that are not
directly used for the target task.
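The two-step recipe (multitask training on existing abusive-language corpora, then few-shot adaptation to a new label set) can be sketched as below. This is a minimal, self-contained illustration, not the paper's implementation: the tiny embedding-bag encoder stands in for whatever pretrained backbone (presumably a multilingual transformer) the authors fine-tune, and the task names, label counts, and random data are all hypothetical.

```python
import torch
import torch.nn as nn

VOCAB, DIM = 5000, 128  # toy vocabulary / hidden size (illustrative only)

class SharedEncoder(nn.Module):
    """Shared text encoder reused across all abusive-language tasks.

    A stand-in for a pretrained transformer backbone; EmbeddingBag
    mean-pools the token embeddings into one sentence vector.
    """
    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(VOCAB, DIM)  # default mode="mean"

    def forward(self, token_ids):               # (batch, seq) -> (batch, DIM)
        return self.emb(token_ids)

def toy_batch(n_labels, batch=16, seq=32):
    """Random stand-in for a batch from a real annotated corpus."""
    x = torch.randint(0, VOCAB, (batch, seq))
    y = torch.randint(0, n_labels, (batch,))
    return x, y

# Step 1: multitask training. One classification head per source dataset,
# all sharing the encoder; task names and label counts are made up here.
tasks = {"hate": 2, "misogyny": 2, "offense": 3}
encoder = SharedEncoder()
heads = nn.ModuleDict({t: nn.Linear(DIM, n) for t, n in tasks.items()})
opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                  # round-robin over the source tasks
    for name, n_labels in tasks.items():
        x, y = toy_batch(n_labels)
        loss = loss_fn(heads[name](encoder(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()

# Step 2: few-shot adaptation. A fresh head for the target label set,
# fine-tuned together with the encoder on a handful of target examples.
target_labels = 4                        # e.g. a new platform's taxonomy
target_head = nn.Linear(DIM, target_labels)
ft_opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(target_head.parameters()), lr=1e-5)

x_few, y_few = toy_batch(target_labels, batch=8)    # the "few shots"
for epoch in range(20):
    loss = loss_fn(target_head(encoder(x_few)), y_few)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```

Sharing the encoder across the source tasks is what lets the few-shot phase start from a general notion of abusiveness rather than from scratch, which is consistent with the abstract's observation that the models benefit from labels not directly used for the target task.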