Model-agnostic variable importance for predictive uncertainty: an entropy-based approach
arXiv (Cornell University), 2023
Abstract
In order to trust the predictions of a machine learning algorithm, it is
necessary to understand the factors that contribute to those predictions. In
the case of probabilistic and uncertainty-aware models, it is necessary to
understand not only the reasons for the predictions themselves, but also the
reasons for the model's level of confidence in those predictions. In this
paper, we show how existing methods in explainability can be extended to
uncertainty-aware models and how such extensions can be used to understand the
sources of uncertainty in a model's predictive distribution. In particular, by
adapting permutation feature importance, partial dependence plots, and
individual conditional expectation plots, we demonstrate that novel insights
into model behaviour may be obtained and that these methods can be used to
measure the impact of features on both the entropy of the predictive
distribution and the log-likelihood of the ground truth labels under that
distribution. With experiments using both synthetic and real-world data, we
demonstrate the utility of these approaches to understand both the sources of
uncertainty and their impact on model performance.
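The abstract's central idea, extending permutation feature importance so that features are scored by their effect on the entropy of the predictive distribution rather than on a point-prediction error, can be illustrated with a minimal sketch. The code below is an assumed illustration, not the authors' implementation: it presumes a scikit-learn-style classifier exposing predict_proba and a NumPy feature matrix, and the helper names predictive_entropy and permutation_entropy_importance are hypothetical.

import numpy as np

def predictive_entropy(probs):
    # Shannon entropy of each row of an (n_samples, n_classes) probability matrix.
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def permutation_entropy_importance(model, X, n_repeats=10, random_state=0):
    # Permutation importance scored by the change in mean predictive entropy.
    # A positive score means permuting the feature raises the model's average
    # uncertainty; a negative score means the permutation makes it more confident.
    rng = np.random.default_rng(random_state)
    baseline = predictive_entropy(model.predict_proba(X)).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            scores.append(predictive_entropy(model.predict_proba(X_perm)).mean())
        importances[j] = np.mean(scores) - baseline
    return importances

The same permute-and-rescore loop could be pointed at the log-likelihood of the ground-truth labels instead of the entropy, which mirrors the second measure discussed in the abstract.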
Keywords
Model Interpretability, Interpretable Models, Machine Learning Interpretability, Feature Importance