Depth-Induced Gap-Reducing Network for RGB-D Salient Object Detection: An Interaction, Guidance and Refinement Approach

Xiaolong Cheng, Xuan Zheng, Jialun Pei, He Tang, Zehua Lyu, Chuanbo Chen

IEEE Transactions on Multimedia (2023)

Abstract
Depth provides complementary information for salient object detection (SOD). However, the performance of RGB-D SOD methods is usually hindered by low-quality depth maps, the cross-modality semantic gap, and the intrinsic gap between multi-level features. Although depth quality assessment has been embedded into recent RGB-D SOD methods, these methods do not consider the inconsistency of depth formats across datasets. In this paper, we propose an interpretable and effective mechanism called interference degree (ID) to assess depth quality and reweight the contribution of single-modality features without extra annotation. A cross-modality interaction block (CMIB) is then designed to reduce the semantic gap between RGB and depth features with the help of the ID mechanism, and a mutually guided cross-level fusion (MGCF) module is designed to reduce the intrinsic gap among multi-level features. Finally, a refinement branch is proposed to enhance the salient regions and suppress the non-salient regions of the fused features. Extensive experiments on six benchmark datasets show that the proposed depth-induced gap-reducing network (DIGR-Net) outperforms 20 recent state-of-the-art methods.
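The abstract does not spell out how the interference degree (ID) reweights the single-modality features, so the following is only a minimal PyTorch-style sketch of one plausible reading: a scalar depth-quality score is predicted from globally pooled depth features and used to down-weight the depth branch before fusion with RGB. The module name `DepthReweighting`, the MLP scorer, and the `1 - interference` weighting are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthReweighting(nn.Module):
    """Hypothetical depth-quality reweighting inspired by the ID idea:
    a scalar 'interference degree' predicted from pooled depth features
    scales the depth branch before cross-modality fusion. The real ID
    formulation in DIGR-Net is not given in the abstract."""

    def __init__(self, channels: int):
        super().__init__()
        # Small MLP mapping globally pooled depth features to a score in (0, 1).
        self.score = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor):
        # Global average pooling over spatial dimensions -> (B, C)
        pooled = F.adaptive_avg_pool2d(depth_feat, 1).flatten(1)
        interference = self.score(pooled)                # (B, 1); high = noisy depth
        w_depth = (1.0 - interference).view(-1, 1, 1, 1)
        # Down-weight depth features when interference is high, then fuse with RGB.
        fused = rgb_feat + w_depth * depth_feat
        return fused, interference


if __name__ == "__main__":
    block = DepthReweighting(channels=64)
    rgb = torch.randn(2, 64, 32, 32)
    depth = torch.randn(2, 64, 32, 32)
    fused, idg = block(rgb, depth)
    print(fused.shape, idg.squeeze(1))
```

In this reading, a depth map judged unreliable contributes little to the fused features, which matches the abstract's claim that ID reweights single-modality contributions without extra annotation.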
Keywords
salient, depth-induced, gap-reducing