Can Interpretability Layouts Influence Human Perception of Offensive Sentences?
arXiv (2024)
Abstract
This paper conducts a user study to assess whether three machine learning
(ML) interpretability layouts can influence participants' views when evaluating
sentences containing hate speech, focusing on the "Misogyny" and "Racism"
classes. Given the existence of divergent conclusions in the literature, we
provide empirical evidence on using ML interpretability in online communities
through statistical and qualitative analyses of questionnaire responses. The
A Generalized Additive Model estimates participants' ratings, incorporating
both within-subject and between-subject designs. While our statistical analysis
indicates that none of the interpretability layouts significantly influences
participants' views, our qualitative analysis demonstrates the advantages of ML
interpretability: 1) triggering participants to provide corrective feedback in
case of discrepancies between their views and the model, and 2) providing
insights to evaluate a model's behavior beyond traditional performance metrics.