Unraveling ML Models of Emotion With NOVA: Multi-Level Explainable AI for Non-Experts

IEEE Transactions on Affective Computing (2022)

Abstract
In this article, we introduce a next-generation annotation tool called NOVA for emotional behaviour analysis, which implements a workflow that interactively incorporates the 'human in the loop'. A main aspect of NOVA is the possibility of applying semi-supervised active learning, where machine learning techniques are used already during the annotation process to pre-label data automatically. Furthermore, NOVA implements recent eXplainable AI (XAI) techniques to provide users with both a confidence value for the automatically predicted annotations and visual explanations. We investigate how such techniques can assist non-experts in terms of trust, perceived self-efficacy, cognitive workload, and the formation of correct mental models about the system, by conducting a user study with 53 participants. The results show that NOVA can easily be used by non-experts and leads to high computer self-efficacy. Furthermore, the results indicate that XAI visualisations help users to create more correct mental models about the machine learning system compared to the baseline condition. Nevertheless, we suggest that explanations in the field of AI have to be focused more on user needs as well as on the classification task and the model they are meant to explain.
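The semi-supervised pre-labeling the abstract describes can be sketched as a confidence-gated loop: samples the model predicts with high confidence are pre-labeled automatically, while uncertain samples are queued for the human annotator. The following is a minimal illustrative sketch; the function name, threshold, and toy probabilities are assumptions for illustration, not NOVA's actual API.

```python
# Illustrative sketch of confidence-gated pre-labeling (hypothetical,
# not NOVA's actual interface): high-confidence predictions are accepted
# automatically; the rest go to the human-in-the-loop review queue.
import numpy as np

def prelabel(probs, threshold=0.8):
    """Split samples into auto-labeled and human-review queues.

    probs: (n_samples, n_classes) array of predicted class probabilities.
    Returns (auto, review): auto is a list of (index, label, confidence)
    for samples at or above the threshold; review lists the rest.
    """
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    auto = [(int(i), int(labels[i]), float(conf[i]))
            for i in np.where(conf >= threshold)[0]]
    review = [int(i) for i in np.where(conf < threshold)[0]]
    return auto, review

# Toy predictions for four frames of a 3-class emotion classifier.
probs = np.array([
    [0.92, 0.05, 0.03],  # confident -> pre-labeled automatically
    [0.40, 0.35, 0.25],  # uncertain -> queued for the annotator
    [0.10, 0.85, 0.05],
    [0.55, 0.30, 0.15],
])
auto, review = prelabel(probs)
```

Here frames 0 and 2 would be pre-labeled, while frames 1 and 3 would be shown to the annotator, mirroring the human-in-the-loop workflow the paper investigates.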
Keywords
Tools and methods of annotation for provision of emotional corpora, interactive machine learning, explainable AI, trust, mental model, computer self-efficacy, human-computer interaction, annotation tools