Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance.

arXiv: Machine Learning (2016)

Abstract
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to make predictions about a model's behavior. In this work, we propose anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations for which the coverage boundaries are very clear. We compare aLIME to linear LIME with simulated experiments, and demonstrate the flexibility of aLIME with qualitative examples from a variety of domains and tasks.
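
To illustrate the coverage and precision notions the abstract relies on, the minimal sketch below estimates both quantities for a candidate rule-based explanation ("anchor") on a toy binary-feature model via Monte Carlo sampling. The function names (`estimate_precision_coverage`, `sample_perturbations`), the uniform perturbation distribution, and the toy model are assumptions made purely for illustration; this is not the paper's actual aLIME algorithm.

```python
import numpy as np

def estimate_precision_coverage(model_predict, instance, anchor_features,
                                sample_perturbations, n_samples=1000, seed=0):
    """Monte Carlo estimate of the precision and coverage of a candidate rule.

    The candidate rule ("anchor") is: every feature index in `anchor_features`
    keeps the value it has in `instance`.  Precision = fraction of perturbed
    points satisfying the rule on which the model's prediction matches its
    prediction for `instance`.  Coverage = fraction of perturbed points that
    satisfy the rule at all (the region where the rule applies).
    """
    rng = np.random.default_rng(seed)
    target = model_predict(instance[None, :])[0]

    samples = sample_perturbations(n_samples, rng)   # shape (n_samples, n_features)
    # A sample satisfies the rule if it matches `instance` on every anchored feature.
    satisfies = np.all(samples[:, anchor_features] == instance[anchor_features], axis=1)

    coverage = satisfies.mean()
    if not satisfies.any():
        return 0.0, coverage
    precision = (model_predict(samples[satisfies]) == target).mean()
    return precision, coverage


if __name__ == "__main__":
    # Toy model: predicts 1 iff feature 0 AND feature 2 are both on (illustrative only).
    def model_predict(X):
        return ((X[:, 0] == 1) & (X[:, 2] == 1)).astype(int)

    instance = np.array([1, 0, 1, 1, 0])

    # Assumed perturbation distribution: uniform over binary feature vectors.
    def sample_perturbations(n, rng):
        return rng.integers(0, 2, size=(n, instance.shape[0]))

    prec, cov = estimate_precision_coverage(model_predict, instance,
                                            anchor_features=[0, 2],
                                            sample_perturbations=sample_perturbations)
    print(f"anchor {{x0=1, x2=1}}: precision={prec:.2f}, coverage={cov:.2f}")
```

In this toy setting the rule {x0=1, x2=1} attains precision 1.0 (the model always agrees with its prediction on the instance wherever the rule holds) while its coverage is roughly 0.25, illustrating the trade-off between a rule's precision and the clearly delimited region to which it applies.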