Visualizing surrogate decision trees of convolutional neural networks

Journal of Visualization(2019)

Abstract
Interpreting the decision-making of black-box machine learning models has become urgent due to their lack of transparency. One effective way to interpret such models is to transform them into interpretable surrogate models such as decision trees and rule lists. Compared with other methods for opening black boxes, rule extraction is universal and can in theory be applied to any black-box model. In practice, however, it is poorly suited to deep learning models such as convolutional neural networks (CNNs): the extracted rules or decision trees are too large to interpret, and the rules are not expressed at a semantic level. These two drawbacks limit the usability of rule extraction for deep learning models. In this paper, we adopt a new strategy to address the problem. We first decompose a CNN into a feature extractor and a classifier, and extract the decision tree from the classifier alone. We then leverage a large set of segmented, labeled images to learn the concept that each feature represents. This method extracts human-readable decision trees from CNNs. Finally, we build CNN2DT, a visual analysis system that enables users to explore the surrogate decision trees. Use cases show that CNN2DT provides both global and local interpretations of the CNN decision process. In addition, users can easily find the reasons for the misclassification of individual images and compare the discriminating capacity of different models. A user study demonstrates the effectiveness of CNN2DT on AlexNet and VGG16 for image classification.
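To make the decomposition concrete, the following is a minimal sketch of the surrogate-tree idea the abstract describes: split a pretrained CNN at the classifier boundary, then fit a decision tree that mimics the classifier head on the extracted features. It assumes PyTorch/torchvision for the CNN and scikit-learn for the tree; the choice of AlexNet, the placeholder input batch, and the tree's `max_depth` are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: fit a surrogate decision tree on a CNN's classifier-stage features.
# Assumptions (not from the paper): torchvision's AlexNet, max_depth=5.
import torch
import torchvision.models as models
from sklearn.tree import DecisionTreeClassifier

# Load a pretrained AlexNet and split it at the classifier boundary.
cnn = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
cnn.eval()

def extract_features(images):
    """Run images through the convolutional feature extractor only."""
    with torch.no_grad():
        x = cnn.features(images)      # conv + pooling stack
        x = cnn.avgpool(x)
        return torch.flatten(x, 1)    # (batch, 9216) feature vectors

def cnn_labels(features):
    """Labels produced by the CNN's own classifier head (the teacher)."""
    with torch.no_grad():
        return cnn.classifier(features).argmax(dim=1)

# Placeholder batch; in practice this would come from an image loader
# with the standard ImageNet preprocessing.
images = torch.randn(64, 3, 224, 224)
feats = extract_features(images)
labels = cnn_labels(feats)

# The surrogate tree reproduces the head's decisions over the features,
# so each split can later be mapped to a human-readable concept.
tree = DecisionTreeClassifier(max_depth=5)
tree.fit(feats.numpy(), labels.numpy())
print("fidelity to CNN head:", tree.score(feats.numpy(), labels.numpy()))
```

Because the tree is fit only on the classifier's input features rather than raw pixels, it stays small enough to read, and each feature dimension can be named with a learned concept, which is what keeps the surrogate at a semantic level.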
Keywords
Rule extraction, Surrogate decision tree, Convolutional neural networks, Deep learning, Model interpretation