Feature Relevance Evaluation using Grad-CAM, LIME and SHAP for Deep Learning SAR Data Classification

2022 23rd International Radar Symposium (IRS), 2022

Abstract
Deep Neural Networks (DNNs) are investigated and visualized for predictive analysis and automatic classification. DNNs used for Automatic Target Recognition (ATR) have built-in feature extraction and classification abilities, but their inner workings become more opaque as the networks grow deeper and more complex, rendering them black boxes. The main goal of this paper is to reveal what the network perceives when classifying Moving and Stationary Target Acquisition and Recognition (MSTAR) targets. Past works have suggested that classification of these targets was performed solely on the basis of clutter within the MSTAR data. Here we show that a DNN trained on the MSTAR dataset classifies based only on target information, with clutter playing no role. To demonstrate this, heatmaps are generated using the Gradient-weighted Class Activation Mapping (Grad-CAM) method to highlight the areas of attention in each input Synthetic Aperture Radar (SAR) image. To probe further into the interpretability of the classifiers, reliable post hoc explanation techniques, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are used to approximate the behaviour of the black box by extracting relationships between feature values and predictions.
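The Grad-CAM heatmaps referred to above follow the standard weighting scheme: each convolutional activation map is weighted by the global-average-pooled gradient of the class score with respect to that map, the weighted maps are summed, and a ReLU keeps only features that positively support the class. The following is a minimal NumPy sketch of that computation, not the paper's implementation; the toy activation and gradient arrays are invented for illustration.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM heatmap.

    activations, gradients: (K, H, W) arrays holding one conv layer's
    feature maps and the class-score gradients w.r.t. those maps.
    """
    # alpha_k: global-average-pool the gradients for each channel k
    alphas = gradients.mean(axis=(1, 2))                      # shape (K,)
    # weighted sum of activation maps, then ReLU
    heatmap = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0.0)
    # normalize to [0, 1] for overlaying on the input SAR image
    if heatmap.max() > 0:
        heatmap = heatmap / heatmap.max()
    return heatmap

# toy example: 2 channels of 4x4 activations (hypothetical values)
acts = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
grads = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -1.0)])
cam = grad_cam(acts, grads)
```

In a real pipeline the activations and gradients would be captured from the last convolutional layer of the trained classifier via forward and backward hooks, and the normalized heatmap upsampled to the input image size.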
Key words
Automatic Target Recognition, Deep Neural Networks, Convolutional Neural Network, Moving and Stationary Target Acquisition and Recognition, Synthetic Aperture Radar, Local Interpretable Model-Agnostic Explanations, SHapley Additive exPlanations, Gradient-weighted Class Activation Mapping