Visualising deep neural network decisions

International Conference on Learning Representations (ICLR 2017). CBLS (2017)

Abstract
Visualizing Deep Neural Network Decisions

Luisa M. Zintgraf 1, Taco S. Cohen 1, Tameem Adel 1, Max Welling 1,2
1 University of Amsterdam, 2 Canadian Institute for Advanced Research

We want to understand deep neural networks (DNNs) by explaining their individual decisions. This can:

• accelerate their adoption in industry, government, and healthcare;
• lead to new insights and theories in poorly understood domains;
• improve network architectures.

(Figure: visualization of how different patch sizes influence the result, i.e., how much information is removed at once.)
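The patch-size figure refers to removing square regions of the input and measuring how the network's prediction changes. A minimal occlusion-style sketch of that idea (using a simple constant fill rather than the paper's conditional sampling; `toy_predict` and all other names here are illustrative, not from the poster):

```python
import numpy as np

def occlusion_saliency(image, predict, patch_size=4, fill=0.0):
    """Relevance map for a single-channel image: the drop in the
    classifier's score when each patch_size x patch_size region is
    replaced by a constant `fill` value. Larger patches remove more
    information at once, giving coarser but stronger evidence maps."""
    h, w = image.shape
    base = predict(image)                # score on the unmodified image
    saliency = np.zeros((h, w))
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            occluded = image.copy()
            occluded[y:y + patch_size, x:x + patch_size] = fill
            # Positive values mark regions whose removal hurts the score.
            saliency[y:y + patch_size, x:x + patch_size] = base - predict(occluded)
    return saliency

# Toy classifier (stand-in for a DNN): score is the mean brightness
# of the top-left quadrant, so only that region is "evidence".
def toy_predict(img):
    return img[:4, :4].mean()

img = np.zeros((8, 8))
img[:4, :4] = 1.0                        # bright evidence region
sal = occlusion_saliency(img, toy_predict, patch_size=4)
```

With this toy setup, only the top-left patch carries relevance; occluding any other patch leaves the score unchanged, so its saliency is zero.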