Attentive Cross-Modal Fusion Network for RGB-D Saliency Detection

IEEE Transactions on Multimedia (2021)

Cited 26 | Viewed 48
Abstract
In this paper, an attentive cross-modal fusion (ACMF) network is proposed for RGB-D salient object detection. The proposed method selectively fuses features in a cross-modal manner and uses a fusion refinement module to combine output features from different resolutions. The attentive cross-modal fusion network is built on residual attention. At each level of the ResNet output, both the RGB and depth features are split into an identity map and a weighted attention map, and each identity map is reweighted by the attention map of the paired modality. Moreover, lower-level features with higher resolution are adopted to refine the boundaries of detected targets. The entire architecture can be trained end-to-end. The proposed ACMF is compared with state-of-the-art methods on eight recent datasets, and the results demonstrate that the model achieves advanced performance on RGB-D salient object detection.
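The core reweighting idea in the abstract, where each modality's identity map is modulated by the paired modality's attention map in a residual-attention style, can be illustrated with a minimal NumPy sketch. The function name and the sigmoid gating below are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attentive_cross_modal_fusion(rgb_feat, depth_feat):
    """Sketch of cross-modal residual attention (assumed form):
    each modality keeps an identity map and is reweighted by the
    paired modality's attention map, i.e. identity * (1 + attention)."""
    # Hypothetical attention maps derived from each modality's features.
    rgb_att = sigmoid(rgb_feat)
    depth_att = sigmoid(depth_feat)
    # Residual attention: the identity map is preserved (the "+1")
    # and modulated by the paired modality's attention.
    rgb_out = rgb_feat * (1.0 + depth_att)
    depth_out = depth_feat * (1.0 + rgb_att)
    return rgb_out, depth_out
```

The residual form keeps the identity path intact, so an uninformative attention map degrades gracefully toward the unfused features rather than suppressing them.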
Keywords
Cross-modal attention, residual attention, fusion refinement network, RGB-D salient object detection