Enhanced multihead self-attention block network for remote sensing image scene classification

JOURNAL OF APPLIED REMOTE SENSING (2023)

Abstract
Remote sensing image scene classification has been widely researched with the aim of assigning semantic labels to land cover. Although convolutional neural networks (CNNs), such as VggNet and ResNet, have achieved good performance, the complex backgrounds and redundant information of remote sensing images restrict further improvements in accuracy. We propose an enhanced multihead self-attention block network, which effectively reduces the adverse impact of background clutter and emphasizes the salient information. Because the high-level features of a CNN may be redundant, we replace only the final three bottleneck blocks of ResNet50 with the enhanced multihead self-attention layer, so that the model focuses more effectively on the salient region of each image. Our enhanced multihead self-attention layer improves on the classical module in three ways. First, we construct a triple-way convolution to handle the arbitrary orientation of remote sensing images and obtain more stable attention information. Second, improved relative position encodings are used to account for the relative distance between features at different locations. Finally, we apply depthwise convolution and an InstanceNorm operation to restore the representational diversity of the multiple heads. Comparative and ablation experiments on three public datasets show that our approach improves significantly upon the baseline and achieves remarkable performance compared with several state-of-the-art methods.
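To make the second improvement concrete, the following is a minimal NumPy sketch of multihead self-attention over a flattened H x W feature map with a 2D relative position bias added to the attention logits. All weights and the bias table are random stand-ins for illustration; the paper's actual triple-way convolution, depthwise convolution, and InstanceNorm components are not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_bias(H, W, table):
    """Gather a (heads, N, N) bias from a (heads, 2H-1, 2W-1) table.

    Each attention logit between positions i and j receives a bias
    indexed by their relative offset on the H x W grid, so features
    that are spatially close can be weighted differently from distant ones.
    """
    coords = np.stack(
        np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), axis=-1
    ).reshape(-1, 2)                                  # (N, 2) grid coordinates
    rel = coords[:, None, :] - coords[None, :, :]     # (N, N, 2) offsets
    return table[:, rel[..., 0] + H - 1, rel[..., 1] + W - 1]

def multihead_self_attention(x, num_heads, w_q, w_k, w_v, bias):
    """x: (N, d) tokens; bias: (num_heads, N, N) relative position bias."""
    N, d = x.shape
    dh = d // num_heads
    # Project and split into heads: (num_heads, N, dh)
    q = (x @ w_q).reshape(N, num_heads, dh).transpose(1, 0, 2)
    k = (x @ w_k).reshape(N, num_heads, dh).transpose(1, 0, 2)
    v = (x @ w_v).reshape(N, num_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product logits plus the relative position bias
    logits = q @ k.transpose(0, 2, 1) / np.sqrt(dh) + bias
    attn = softmax(logits, axis=-1)
    # Merge heads back to (N, d)
    return (attn @ v).transpose(1, 0, 2).reshape(N, d)

rng = np.random.default_rng(0)
H = W = 4        # small feature-map grid for the demo
d, heads = 8, 2
N = H * W
x = rng.standard_normal((N, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
bias = relative_bias(H, W, rng.standard_normal((heads, 2 * H - 1, 2 * W - 1)))
y = multihead_self_attention(x, heads, w_q, w_k, w_v, bias)
print(y.shape)  # (16, 8)
```

In the paper's setting the attended tokens would come from the ResNet50 feature map in place of the final bottleneck blocks, and the bias table would be learned rather than random.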
Keywords
remote sensing scene classification, convolutional neural networks, self-attention