Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition

Pattern Recognition (2017)

Cited by 1545
Abstract
Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems such as image recognition. Although these methods perform impressively well, they have a significant disadvantage, the lack of transparency, limiting the interpretability of the solution and thus the scope of application in practice. Especially DNNs act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method, called deep Taylor decomposition, efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets.

Highlights
- A novel method to explain nonlinear classification decisions in terms of input variables is introduced.
- The method is based on Taylor expansions and decomposes the output of a deep neural network in terms of input variables.
- The resulting deep Taylor decomposition can be applied directly to existing neural networks without retraining.
- The method is tested on two large-scale neural networks for image classification: BVLC CaffeNet and GoogLeNet.
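Illustrative sketch (not taken from the paper's text): the abstract states that deep Taylor decomposition explains a classification decision by backpropagating relevance from the output layer to the input layer. For a dense ReLU layer with non-negative inputs, one of the propagation rules associated with deep Taylor decomposition (commonly called the z+ rule) redistributes the relevance R_k of each output neuron onto the inputs in proportion to a_j * max(0, w_jk). The function name and NumPy-based implementation below are assumptions for illustration only, not the authors' code.

    import numpy as np

    # Sketch of the z+ relevance-propagation rule for one dense ReLU layer.
    # Names and API are illustrative assumptions, not code from the paper.
    def zplus_propagate(a, W, R_out, eps=1e-9):
        # a:     (J,)  non-negative activations entering the layer
        # W:     (J,K) weight matrix of the layer
        # R_out: (K,)  relevance assigned to the layer's outputs
        # returns (J,) relevance redistributed onto the layer's inputs
        Wp = np.maximum(W, 0.0)      # keep only the positive part of the weights
        z = a @ Wp + eps             # normalization term per output neuron
        s = R_out / z                # relevance per unit of positive contribution
        return a * (Wp @ s)          # input relevance, proportional to a_j * w_jk^+

Applied layer by layer, starting from the relevance placed on the selected output neuron, such a rule approximately conserves the total relevance at each step (up to the numerical stabilizer eps), so the resulting heatmap over the input variables sums roughly to the network output being explained.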
Keywords
Deep neural networks, Heatmapping, Taylor decomposition, Relevance propagation, Image recognition