Identifying Implicitly Abusive Remarks about Identity Groups using a Linguistically Informed Approach

North American Chapter of the Association for Computational Linguistics (NAACL)(2022)

Abstract
We address the task of distinguishing implicitly abusive sentences about identity groups (Muslims terrorize the world daily) from other group-related negative polar sentences (Muslims despise terrorism). Implicitly abusive language comprises utterances whose abuse is not conveyed by abusive words (e.g., bimbo or scum). So far, the detection of such utterances could not be properly addressed, since existing datasets displaying a high degree of implicit abuse are fairly biased. Following the recently proposed strategy of tackling implicit abuse by separately addressing its different subtypes, we present a new, focused, and less biased dataset that consists of the subtype of atomic negative sentences about identity groups. For that task, we model components that each address one facet of such implicit abuse, i.e., depiction as perpetrators, aspectual classification, and non-conformist views. The approach generalizes across different identity groups and languages.