FSNet: Dual Interpretable Graph Convolutional Network for Alzheimer's Disease Analysis

IEEE Transactions on Emerging Topics in Computational Intelligence (2023)

Abstract
Graph Convolutional Networks (GCNs) are widely used in medical image diagnosis research because they can automatically learn powerful and robust feature representations. However, their performance can be significantly degraded by trivial or corrupted medical features and samples. Moreover, existing methods cannot interpret significant features and samples simultaneously. To overcome these limitations, we propose a novel dual interpretable graph convolutional network, named FSNet, that simultaneously selects significant features and samples so as to improve both diagnostic performance and interpretability. Specifically, the proposed network consists of three modules: two of them leverage a simple yet effective sparse mechanism to obtain feature and sample weight matrices for interpreting features and samples, respectively, and the third performs medical diagnosis. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets demonstrate superior classification performance and interpretability over recent state-of-the-art methods.
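The abstract describes an architecture with two sparse weighting modules (one for features, one for samples) feeding a GCN-based diagnosis module. The sketch below is a minimal PyTorch illustration of that idea only; the class names, the sigmoid-gate mechanism, and the L1 sparsity penalty are assumptions chosen for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of a dual-weighting GCN in the spirit of FSNet.
# The gating mechanism (sigmoid gates + L1 penalty) is an assumption,
# not the paper's actual sparse mechanism.
import torch
import torch.nn as nn


def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a dense adjacency matrix with self-loops."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(adj.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ adj @ d_inv_sqrt


class GCNLayer(nn.Module):
    """One dense GCN layer: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_hat: torch.Tensor) -> torch.Tensor:
        return torch.relu(adj_hat @ self.linear(x))


class FSNetSketch(nn.Module):
    """Three modules: feature weighting, sample weighting, and diagnosis."""
    def __init__(self, num_features: int, num_samples: int,
                 hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        # Module 1: learnable per-feature gates (made sparse by an L1 penalty).
        self.feature_gate = nn.Parameter(torch.zeros(num_features))
        # Module 2: learnable per-sample gates (made sparse by an L1 penalty).
        self.sample_gate = nn.Parameter(torch.zeros(num_samples))
        # Module 3: GCN-based diagnosis head.
        self.gcn1 = GCNLayer(num_features, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        adj_hat = normalize_adjacency(adj)
        f_w = torch.sigmoid(self.feature_gate)   # interpretable feature weights
        s_w = torch.sigmoid(self.sample_gate)    # interpretable sample weights
        x = x * f_w.unsqueeze(0)                 # reweight feature columns
        x = x * s_w.unsqueeze(1)                 # reweight sample rows
        h = self.gcn2(self.gcn1(x, adj_hat), adj_hat)
        logits = self.classifier(h)
        sparsity = f_w.abs().sum() + s_w.abs().sum()  # add to the training loss
        return logits, f_w, s_w, sparsity
```

In such a setup, the returned sparsity term would be added to the classification loss during training so that the learned feature and sample weights become sparse, and their magnitudes can then be read off as indicators of which features and samples the model considers significant.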
Keywords
Training, Sparse matrices, Feature extraction, Magnetic resonance imaging, Neuroimaging, Medical diagnostic imaging, Medical diagnosis, Alzheimer's disease diagnosis research, feature interpretability, graph convolutional network, sample interpretability