Interpretation of Lesional Detection via Counterfactual Generation.

ICIP (2021)

Abstract
To interpret the decisions of Deep Neural Networks (DNNs), explainable artificial intelligence has been widely investigated. In particular, visualizing attribution maps is known as one of the most effective ways to explain a trained network. Applying existing visualization methods to medical images raises significant issues, as medical image datasets are commonly imbalanced and scarce. To tackle these issues and provide more accurate explanations for medical images, in this paper we propose a new explainable framework, the Counterfactual Generative Network (CGN). We embed the counterfactual lesion predictions of DNNs into our explainable framework as prior conditions, guiding it to generate various counterfactual lesional images from normal input sources, or vice versa. By doing so, CGN can produce detailed attribution maps and generate corresponding normal images from lesional inputs. Extensive experiments are conducted on two chest X-ray datasets to verify the effectiveness of our method.
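The abstract's recipe, conditioning a generator on a counterfactual label and reading the attribution map off the edit it makes, can be illustrated with a minimal sketch. This is a generic counterfactual-explanation pattern, not the paper's actual implementation: `toy_generator` and the class labels are placeholders standing in for the trained CGN.

```python
import numpy as np

def attribution_map(x, x_cf):
    """Per-pixel attribution as the absolute difference between the input
    image and its generated counterfactual. Real frameworks may use a
    learned or normalized difference; this is the simplest variant."""
    return np.abs(x - x_cf)

def toy_generator(x, target_label):
    """Hypothetical stand-in for a conditional generator G(x, y).
    When asked for the lesional class (assumed label 1), it brightens a
    fixed region to mimic synthesizing a lesion."""
    x_cf = x.copy()
    if target_label == 1:
        x_cf[2:4, 2:4] += 0.5
    return x_cf

x = np.zeros((6, 6))             # "normal" input image
x_cf = toy_generator(x, 1)       # counterfactual lesional image
attr = attribution_map(x, x_cf)  # nonzero exactly where the generator edited
```

The attribution map is sharp by construction: it highlights only the pixels the generator had to change to flip the predicted class, which is why counterfactual generation can localize lesions more precisely than gradient-based saliency on imbalanced medical data.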
Keywords
Deep learning, Visualization, Image processing, Conferences, Data visualization, Medical diagnosis, Lesions