Explain Thyself Bully: Sentiment Aided Cyberbullying Detection with Explanation

ICDAR (3) (2024)

Abstract
Cyberbullying has become a serious issue with the growing popularity of social media networks and online communication apps. While considerable research is devoted to building better models for cyberbullying detection in monolingual settings, there is very little work on code-mixed languages or on the explainability aspect of cyberbullying detection. Recent regulations, such as the "right to explanation" in the General Data Protection Regulation, have spurred research into interpretable models rather than a sole focus on performance. Motivated by this, we develop mExCB, the first interpretable multi-task model for automatic cyberbullying detection in code-mixed languages, which simultaneously solves several tasks: cyberbullying detection, explanation/rationale identification, target group detection, and sentiment analysis. We also introduce BullyExplain, the first benchmark dataset for explainable cyberbullying detection in a code-mixed language. Each post in the BullyExplain dataset is annotated with four labels: bully label, sentiment label, target, and rationales (explainability), i.e., the phrases responsible for labeling the post as a bully. The proposed multi-task framework (mExCB), based on CNN and GRU with word- and sub-sentence (SS)-level attention, outperforms several baselines and state-of-the-art models on the BullyExplain dataset.
Keywords
sentiment aided cyberbullying detection, thyself bully