Improving Pre-Training and Fine-Tuning for Few-Shot SAR Automatic Target Recognition

Remote Sensing (2023)

Abstract
SAR ATR (synthetic aperture radar automatic target recognition) is a hot topic in remote sensing. Because classic SAR ATR methods depend heavily on large amounts of labeled data, this work proposes a few-shot target recognition approach (FTL) based on transfer learning to achieve accurate recognition of SAR images in few-shot scenarios, and further introduces a model distillation strategy to improve performance. The method consists of three parts. First, a data engine uses a style-transfer model and optical image data to generate SAR-style images, realizing cross-domain conversion and effectively alleviating the shortage of training data for the SAR image classification model. Second, model training pre-trains the model on SAR image datasets; a deep Brownian distance covariance (Deep BDC) pooling layer is introduced to optimize the image feature representation, so that the model learns image representations by measuring the discrepancy between the joint characteristic function of the embedded features and the product of their marginals. Third, model fine-tuning freezes the entire model except the classifier and fine-tunes it with a small amount of novel-class data; knowledge distillation is applied at the same time to train the model repeatedly, sharpen its knowledge, and further enhance performance. Experimental results on the MSTAR benchmark dataset show that the proposed method outperforms state-of-the-art methods on the few-shot SAR ATR problem, reaching a recognition accuracy of about 80% in the 10-way 10-shot setting.
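The Deep BDC pooling mentioned in the abstract replaces global average pooling with a Brownian distance covariance matrix computed from the embedded feature map; the similarity between two images can then be taken as the inner product of their BDC matrices. The following is a minimal PyTorch-style sketch of such a pooling operation, assuming channels act as variables and spatial positions as observations; the function name and the omitted channel-reduction and normalization details are assumptions, not taken from the paper.

```python
import torch

def bdc_pooling(feat, eps=1e-8):
    """Sketch of Brownian distance covariance (BDC) pooling.

    feat: (B, C, H, W) convolutional feature map.
    Returns a (B, C, C) BDC matrix per image.
    """
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)                     # channels as variables, positions as observations
    # pairwise Euclidean distances between channel vectors
    sq = (x * x).sum(dim=2, keepdim=True)             # (B, C, 1)
    dist2 = sq + sq.transpose(1, 2) - 2.0 * x @ x.transpose(1, 2)
    dist = torch.sqrt(dist2.clamp_min(0) + eps)       # (B, C, C) distance matrix
    # double-centering: subtract row and column means, add back the grand mean
    row_mean = dist.mean(dim=2, keepdim=True)
    col_mean = dist.mean(dim=1, keepdim=True)
    grand_mean = dist.mean(dim=(1, 2), keepdim=True)
    return dist - row_mean - col_mean + grand_mean
```

For the fine-tuning stage, the abstract describes freezing everything except the classifier, training on a small amount of novel data, and applying knowledge distillation to train the model repeatedly. A hedged sketch of such a loop, under the assumption of a frozen feature backbone, a linear teacher classifier distilled into a fresh student classifier, and illustrative hyper-parameters (temperature `tau`, weight `alpha`), is given below; none of these names or settings are the paper's exact choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def finetune_with_distillation(backbone, teacher_head, novel_loader,
                               num_classes, epochs=50, tau=4.0, alpha=0.5):
    """Fine-tune only a new classifier on few-shot novel data, with a
    knowledge-distillation term from a previously trained linear teacher head."""
    for p in backbone.parameters():                   # freeze all backbone weights
        p.requires_grad = False
    backbone.eval()

    student_head = nn.Linear(teacher_head.in_features, num_classes)
    optim = torch.optim.SGD(student_head.parameters(), lr=1e-2, momentum=0.9)

    for _ in range(epochs):
        for images, labels in novel_loader:
            with torch.no_grad():
                feats = backbone(images)              # frozen embeddings (B, feat_dim)
                t_logits = teacher_head(feats)        # teacher predictions
            s_logits = student_head(feats)
            ce = F.cross_entropy(s_logits, labels)    # supervised loss on novel data
            kd = F.kl_div(F.log_softmax(s_logits / tau, dim=1),
                          F.softmax(t_logits / tau, dim=1),
                          reduction="batchmean") * tau * tau
            loss = (1 - alpha) * ce + alpha * kd      # blend hard labels and soft targets
            optim.zero_grad()
            loss.backward()
            optim.step()
    return student_head
```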