
I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI

IUI 2021

Abstract
Unintended consequences of deployed AI systems fueled the call for more interpretability in AI systems. Often explainable AI (XAI) systems provide users with simplifying local explanations for individual predictions but leave it up to them to construct a global understanding of the model behavior. In this work, we examine if non-technical users of XAI fall for an illusion of explanatory depth when interpreting additive local explanations. We applied a mixed methods approach consisting of a moderated study with 40 participants and an unmoderated study with 107 crowd workers using a spreadsheet-like explanation interface based on the SHAP framework. We observed what non-technical users do to form their mental models of global AI model behavior from local explanations and how their perception of understanding decreases when it is examined.
Key words
explainable AI, Shapley explanation, cognitive bias, understanding