Multi-scale Spatial-Spectral Attention Guided Fusion Network for Pansharpening

MM '23: Proceedings of the 31st ACM International Conference on Multimedia (2023)

Abstract
Pansharpening fuses high-resolution panchromatic (PAN) images with low-resolution multispectral (LR-MS) images to generate high-resolution multispectral (HR-MS) images. Most deep learning-based pansharpening methods do not account for the inconsistency between the PAN and LR-MS images and fuse the source images by simple concatenation, which may cause spectral and spatial distortion in the fused results. To address this problem, a multi-scale spatial-spectral attention guided fusion network for pansharpening is proposed. First, spatial features from the PAN image and spectral features from the LR-MS image are extracted independently to obtain shallow features. Then, a spatial-spectral attention feature fusion module (SAFFM) is constructed to guide the reconstruction of spatial-spectral features by generating a guidance map, achieving fusion of the reconstructed features at different scales. In SAFFM, the guidance map is designed to ensure the spatial-spectral consistency of the reconstructed features. Finally, considering the differences among multi-scale features, a multi-level feature integration scheme is proposed to progressively fuse the multi-scale features from different SAFFMs. Extensive experiments validate the effectiveness of the proposed network against other state-of-the-art (SOTA) pansharpening methods in both quantitative and qualitative assessments. The source code will be released at https://github.com/MELiMZ/ssaff.
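The core idea of SAFFM, as the abstract describes it, is to replace plain concatenation with a guidance map that weights the two branches. A minimal NumPy sketch of that idea is given below; the function name, the sigmoid-of-correlation guidance, and the convex blend are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def spatial_spectral_attention_fusion(pan_feat, ms_feat):
    """Toy sketch of attention-guided fusion (hypothetical, not the paper's code).

    pan_feat: (C, H, W) spatial features from the PAN branch
    ms_feat:  (C, H, W) spectral features from the (upsampled) LR-MS branch
    """
    # Guidance map: sigmoid of the channel-averaged correlation between branches,
    # standing in for the learned guidance map described in the abstract.
    guidance = 1.0 / (1.0 + np.exp(-(pan_feat * ms_feat).mean(axis=0, keepdims=True)))
    # Attention-weighted blend of the two branches instead of plain concatenation.
    fused = guidance * pan_feat + (1.0 - guidance) * ms_feat
    return fused, guidance

# Example: fuse random 4-channel feature maps at one scale.
pan = np.random.rand(4, 32, 32)
ms = np.random.rand(4, 32, 32)
fused, guidance = spatial_spectral_attention_fusion(pan, ms)
```

In the full network, one such module would operate at each scale, and the multi-level integration scheme would progressively merge the per-scale outputs.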