Robustness Analysis of Deep Learning-Based Lung Cancer Classification Using Explainable Methods.

IEEE Access (2022)

Abstract
Deep Learning (DL) based classification algorithms have been shown to achieve top results in clinical diagnosis, namely with lung cancer datasets. However, the complexity and opaqueness of the models, together with the still scant training datasets, call for the development of explainable modeling methods that enable interpretation of the results. To this end, in this paper we propose a novel interpretability approach and demonstrate how it can be used on a malignancy lung cancer DL classifier to assess its stability and congruence even when fed a small number of image samples. Additionally, by disclosing the regions of the medical images most relevant to the resulting classification, the approach provides important insights into the corresponding clinical meaning apprehended by the algorithm. Explanations of the results provided by ten different models against the same test sample are compared. These attest to the stability of the approach and show that the algorithms focus on the same image regions.
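The abstract does not name the attribution technique, but "disclosing the regions of the medical images most relevant to the resulting classification" is the general shape of a perturbation-based saliency map. As a hedged illustration only (not the authors' method), the sketch below computes an occlusion-sensitivity heatmap on a synthetic image with a toy scoring function standing in for the classifier's malignancy logit; the lesion location, patch size, and `score_fn` are all invented for the example:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Occlusion sensitivity: the score drop observed when each
    patch-sized region of the image is masked out in turn."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy setup (hypothetical): a 16x16 "scan" with a bright lesion-like
# region, and a stand-in score that responds only to that region.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (16, 16))
img[4:8, 4:8] += 1.0                       # synthetic "lesion"
score = lambda x: x[4:8, 4:8].mean()       # stand-in for model logit

heat = occlusion_map(img, score)
print(np.unravel_index(heat.argmax(), heat.shape))  # patch covering the lesion
```

Under this setup, masking the patch that covers the synthetic lesion causes the largest score drop, so the heatmap's maximum marks the region the "model" relies on. Comparing such maps across the ten trained models (e.g. by overlap of their top-scoring regions) is one way the kind of congruence check described in the abstract could be carried out.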
Keywords
Predictive models, Solid modeling, Biomedical imaging, Data models, Computational modeling, Lung cancer, Deep learning, CT scan, congruence, diagnostic imaging, interpretability, malignancy