Incremental XAI: Memorable Understanding of AI with Incremental Explanations
arXiv (2024)
Abstract
Many explainable AI (XAI) techniques strive for interpretability by providing
concise salient information, such as sparse linear factors. However, users
either only see inaccurate global explanations, or highly-varying local
explanations. We propose to provide more detailed explanations by leveraging
the human cognitive capacity to accumulate knowledge by incrementally receiving
more details. Focusing on linear factor explanations (factors × values =
outcome), we introduce Incremental XAI to automatically partition explanations
for general and atypical instances by providing Base + Incremental factors to
help users read and remember more faithful explanations. Memorability is
improved by reusing base factors and reducing the number of factors shown in
atypical cases. In modeling, formative, and summative user studies, we
evaluated the faithfulness, memorability and understandability of Incremental
XAI against baseline explanation methods. This work contributes towards more
usable explanations that users can better ingrain to facilitate intuitive
engagement with AI.
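The core mechanism described above — linear factor explanations of the form factors × values = outcome, partitioned into shared Base factors plus Incremental adjustments for atypical instances — can be sketched as follows. This is an illustrative reconstruction from the abstract alone; the function, feature names, and numbers are all hypothetical, not the paper's actual model or data.

```python
# Hypothetical sketch of Base + Incremental linear factor explanations.
# The abstract's scheme: outcome = sum(factor * value); typical instances
# reuse the shared base factors, while atypical instances add a small set
# of incremental deltas on top, so users only memorize the differences.

def explain(values, base_factors, incremental_factors=None):
    """Compute the explained outcome as a sum of factor * value terms."""
    factors = dict(base_factors)
    if incremental_factors:
        # Incremental factors adjust (or extend) the base factors.
        for name, delta in incremental_factors.items():
            factors[name] = factors.get(name, 0.0) + delta
    return sum(factors.get(name, 0.0) * v for name, v in values.items())

# Made-up housing-price style example:
base = {"rooms": 50.0, "area": 2.0}   # base factors shared by all instances
waterfront_extra = {"area": 1.0}      # incremental delta for an atypical subgroup

instance = {"rooms": 3, "area": 100}
print(explain(instance, base))                     # 3*50 + 100*2 = 350.0
print(explain(instance, base, waterfront_extra))   # 3*50 + 100*3 = 450.0
```

Reusing the base factors and showing only a small delta for atypical cases is what the abstract credits for the memorability gain over fitting entirely separate local explanations per instance.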