Decision Boundary Visualization for Counterfactual Reasoning

Jan-Tobias Sohns, Christoph Garth, Heike Leitte

COMPUTER GRAPHICS FORUM (2023)

Abstract
Machine learning algorithms are widely applied to create powerful prediction models. With increasingly complex models, humans' ability to understand the decision function (that maps from a high-dimensional input space) is quickly exceeded. To explain a model's decisions, black-box methods have been proposed that provide either non-linear maps of the global topology of the decision boundary, or samples that allow approximating it locally. The former loses information about distances in input space, while the latter only provides statements about given samples and lacks a focus on the underlying model for precise 'What-If' reasoning. In this paper, we integrate both approaches and propose an interactive exploration method using local linear maps of the decision space. We create the maps on high-dimensional hyperplanes (2D slices of the high-dimensional parameter space) based on statistical and personal feature mutability and guided by feature importance. We complement the proposed workflow with established model inspection techniques to provide orientation and guidance. We demonstrate our approach on real-world datasets and illustrate that it allows identification of instance-based decision boundary structures and can answer multi-dimensional 'What-If' questions, thereby identifying counterfactual scenarios visually.
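
The paper's interactive system is not reproduced here, but the core idea of a 2D slice of a high-dimensional decision function can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a scikit-learn-style classifier, an arbitrary example dataset (breast cancer), and a hand-picked instance and pair of features standing in for the importance-guided selection described in the abstract. The grid varies only the two slice features while holding all other feature values of the instance fixed, so the 0.5 probability contour traces the local decision boundary and points across it are candidate counterfactuals.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative data and model (assumptions, not the paper's setup).
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]   # instance whose neighbourhood we slice
f1, f2 = 0, 7     # two slice features; in the paper these would be chosen by importance/mutability

# Grid spanning the observed range of the two slice features.
g1 = np.linspace(X[:, f1].min(), X[:, f1].max(), 80)
g2 = np.linspace(X[:, f2].min(), X[:, f2].max(), 80)
G1, G2 = np.meshgrid(g1, g2)

# Each grid point is a copy of the instance with only f1 and f2 replaced,
# i.e. a point on a 2D hyperplane through the high-dimensional input space.
slice_points = np.tile(instance, (G1.size, 1))
slice_points[:, f1] = G1.ravel()
slice_points[:, f2] = G2.ravel()

# Predicted class probability on the slice; the 0.5 contour is the
# local decision boundary seen by this 'What-If' view.
proba = model.predict_proba(slice_points)[:, 1].reshape(G1.shape)

plt.contourf(G1, G2, proba, levels=20, cmap="RdBu", alpha=0.7)
plt.contour(G1, G2, proba, levels=[0.5], colors="k")
plt.scatter(instance[f1], instance[f2], c="yellow", edgecolors="k", label="instance")
plt.xlabel(f"feature {f1}")
plt.ylabel(f"feature {f2}")
plt.legend()
plt.show()
```

Reading the plot, moving the yellow instance across the black contour corresponds to a counterfactual change in the two selected features; the paper extends this basic slicing idea with interactive slice selection, feature mutability constraints, and complementary model inspection views.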
Keywords
visual model evaluation, machine learning explanation, inverse multi-dimensional projection