A Weakly Supervised Gradient Attribution Constraint for Interpretable Classification and Anomaly Detection.

IEEE Transactions on Medical Imaging (2023)

Abstract
The lack of interpretability of deep learning reduces understanding of what happens when a network does not work as expected and hinders its use in critical fields like medicine, which require transparency of decisions. For example, a healthy vs. pathological classification model should rely on radiological signs and not on biases of the training dataset. Several post-hoc models have been proposed to explain the decision of a trained network. However, they are very seldom used to enforce interpretability during training, and none jointly with the classification objective. In this paper, we propose a new weakly supervised method for both interpretable healthy vs. pathological classification and anomaly detection. A new loss term is added to a standard classification model to constrain each voxel of healthy images to drive the network decision towards the healthy class, according to gradient-based attributions. This constraint reveals pathological structures in patient images, allowing their unsupervised segmentation. Moreover, we show, both theoretically and experimentally, that training constrained with the simple Gradient attribution behaves similarly to training constrained with the heavier Expected Gradients attribution, thereby reducing the computational cost. We also propose combining attributions during constrained training, making the model robust to the choice of attribution at inference. Our approach was evaluated on two brain pathologies: tumors and multiple sclerosis. The new constraint provides a more relevant classification, with a more pathology-driven decision. For anomaly detection, the proposed method outperforms the state of the art, especially on the difficult multiple sclerosis lesion segmentation task, with a 15-point Dice improvement.
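The abstract describes the attribution constraint only in words. The following minimal PyTorch sketch illustrates one way such a loss term could be written on healthy images; the function name, the gradient-times-input attribution, and the hinge penalty on attributions pointing away from the healthy class are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attribution_constraint_loss(model, healthy_images, healthy_class=0):
    """Sketch of a gradient-attribution constraint on healthy images.

    The gradient of the healthy-class score with respect to every input voxel
    is computed (the simple "Gradient" attribution). Voxels whose attribution
    drives the decision away from the healthy class are penalised. The exact
    penalty form here is an assumption made for illustration.
    """
    healthy_images = healthy_images.clone().requires_grad_(True)
    logits = model(healthy_images)                       # (B, num_classes)
    healthy_score = logits[:, healthy_class].sum()
    # Gradient attribution: d(healthy score) / d(input voxels)
    grads, = torch.autograd.grad(healthy_score, healthy_images, create_graph=True)
    attribution = grads * healthy_images                  # gradient x input
    # Penalise voxels pushing the decision away from the healthy class
    return F.relu(-attribution).mean()

# Assumed usage inside a standard training loop (names are illustrative):
# loss = F.cross_entropy(model(x), y) \
#        + lam * attribution_constraint_loss(model, x[y == 0])
```

In this sketch the constraint is added as a weighted term on top of the usual classification loss, so the network is trained for classification and attribution consistency at the same time.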
Keywords
interpretable classification, gradient, detection