Boundary-Aware Uncertainty for Feature Attribution Explainers
CoRR (2022)
Abstract
Post-hoc explanation methods have become a critical tool for understanding
black-box classifiers in high-stakes applications. However, high-performing
classifiers are often highly nonlinear and can exhibit complex behavior around
the decision boundary, leading to brittle or misleading local explanations.
There is therefore a pressing need to quantify the uncertainty of such
explanation methods in order to understand when explanations are trustworthy.
In this work we propose the Gaussian Process Explanation UnCertainty (GPEC)
framework, which generates a unified uncertainty estimate combining decision
boundary-aware uncertainty with explanation function approximation uncertainty.
We introduce a novel geodesic-based kernel, which captures the complexity of
the target black-box decision boundary. We show theoretically that the proposed
kernel similarity increases with decision boundary complexity. The proposed
framework is highly flexible; it can be used with any black-box classifier and
feature attribution method. Empirical results on multiple tabular and image
datasets show that the GPEC uncertainty estimate improves understanding of
explanations as compared to existing methods.
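
To make the idea concrete, the sketch below illustrates the function-approximation half of such a framework: fit a Gaussian process to explanation vectors produced by an arbitrary feature attribution method and read the predictive standard deviation at each query point as the uncertainty estimate. This is a minimal illustration, not the authors' implementation; in particular, the Euclidean RBF kernel is a stand-in for the paper's boundary-aware geodesic kernel, and the function names and synthetic data are hypothetical.

    import numpy as np

    def rbf_kernel(A, B, lengthscale=1.0):
        # Placeholder similarity: squared exponential on Euclidean distance.
        # GPEC's kernel instead uses geodesic distances that reflect the
        # decision boundary; swapping it in only requires replacing this
        # function.
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale**2)

    def gp_explanation_uncertainty(X_train, E_train, X_query,
                                   lengthscale=1.0, noise=1e-2):
        # Fit one GP per attribution dimension (shared kernel) and return
        # the predictive mean and standard deviation at the query points.
        K = rbf_kernel(X_train, X_train, lengthscale)
        K += noise * np.eye(len(X_train))
        K_star = rbf_kernel(X_query, X_train, lengthscale)
        L = np.linalg.cholesky(K)
        # Predictive mean: K_star @ K^{-1} @ E_train
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, E_train))
        mean = K_star @ alpha
        # Predictive variance; prior variance k(x, x) = 1 for this kernel,
        # and the variance is shared across attribution dimensions.
        v = np.linalg.solve(L, K_star.T)
        var = 1.0 - np.sum(v**2, axis=0)
        return mean, np.sqrt(np.maximum(var, 0.0))

    # Usage with synthetic data standing in for explainer outputs.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(50, 2))
    E_train = np.tanh(X_train)        # stand-in for feature attributions
    X_query = rng.normal(size=(5, 2))
    mean, std = gp_explanation_uncertainty(X_train, E_train, X_query)
    print(std)                        # higher std = less trustworthy explanation

Replacing rbf_kernel with a geodesic kernel computed with respect to the decision boundary would recover the boundary-aware behavior the abstract describes, so that the reported uncertainty also reflects boundary complexity near the query.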