Improving Deep Neural Network Interpretation for Neuroimaging Using Multivariate Modeling

SN Computer Science (2022)

Abstract
Neural networks are commonly used for the classification and segmentation of medical images. Interpretability of these models remains an area of active research. While progress has been made in visualizing the areas on which convolutional neural networks (CNNs) focus most, the results are often interpreted only qualitatively. There is a need to extend this interpretation methodology to make statistical inferences about how the network classifies examples. The current study applies a multivariate statistical framework to activation maps from CNN classification of neuroimaging data to improve interpretability of the output. Ioflupane-123 SPECT scans from 600 participants in the Parkinson’s Progression Markers Initiative database were classified into individuals with Parkinson’s disease (PD) and healthy controls using a 3D adaptation of the ResNet-34 architecture. 3D Grad-CAM was used to construct activation maps, giving the probability, at each voxel, that the network used that voxel for its final classification. These activation maps were then used in a multivariate modeling framework that corrects for multiple comparisons and spatial correlations. The multivariate model investigated differences in activation maps between PD patients and controls while controlling for age. Results showed expected regions of focus in the basal ganglia, but also significant differences along the nigrostriatal pathway, extending into the midbrain, an area not typically used for diagnosis. Numerous advantages stem from this framework, including greater network diagnostics, the ability to control for covariates that could be affecting network performance, and the production of interpretable results that can be translated clinically to the bedside.
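The statistical step the abstract describes can be illustrated with a minimal sketch: a voxelwise general linear model fit to activation-map values, testing the group effect (PD vs. control) while adjusting for age. This is not the authors' code; the data are simulated, the voxel count is a toy size, and a simple conservative threshold stands in for the paper's full multiple-comparison and spatial-correlation correction.

```python
import numpy as np

# Hedged sketch, not the authors' implementation: simulated Grad-CAM
# activation values for 600 participants at a toy number of voxels.
rng = np.random.default_rng(0)
n, n_vox = 600, 50
group = rng.integers(0, 2, n)            # 1 = PD, 0 = healthy control
age = rng.normal(65, 8, n)               # hypothetical age covariate
maps = rng.normal(0, 1, (n, n_vox))      # simulated activation maps
maps[:, :10] += 0.5 * group[:, None]     # inject a group effect at 10 voxels

# Design matrix: intercept, group, mean-centered age.
X = np.column_stack([np.ones(n), group, age - age.mean()])
beta, *_ = np.linalg.lstsq(X, maps, rcond=None)   # OLS fit at every voxel
resid = maps - X @ beta
dof = n - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_group = beta[1] / se                            # per-voxel group t-statistic

# Conservative fixed threshold as a stand-in for the paper's correction,
# which additionally models spatial correlation across voxels.
print((np.abs(t_group) > 3.5).sum(), "voxels pass the threshold")
```

The design-matrix formulation is what lets the framework control for covariates such as age: any nuisance variable added as a column of `X` is regressed out before the group contrast is tested.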
Keywords
Neural network interpretability, Parkinson’s disease, Deep learning, Multivariate modeling