Depth super-resolution from explicit and implicit high-frequency features

Computer Vision and Image Understanding (2023)

Abstract
Guided depth super-resolution aims to recover a high-resolution depth map from a low-resolution depth map and an associated high-resolution RGB image. However, restoring precise, sharp edges near depth discontinuities and fine structures remains challenging for state-of-the-art methods. To alleviate this issue, we propose a novel multi-stage depth super-resolution network that progressively reconstructs HR depth maps from explicit and implicit high-frequency information. We introduce an efficient transformer to obtain explicit high-frequency information. The shape bias and global context of the transformer allow our model to focus on high-frequency details between objects, i.e., depth discontinuities, rather than texture within objects. Furthermore, we project the input color images into the frequency domain to extract additional implicit high-frequency cues. Finally, to incorporate structural details, we develop a fusion strategy that combines depth features and high-frequency information in a multi-stage, multi-scale framework. Exhaustive experiments on the main benchmarks show that our approach establishes a new state of the art. Code will be publicly available at https://github.com/wudiqx106/DSR-EI.

• DSR-EI employs an efficient transformer for explicit high-frequency (HF) feature extraction.
• We propose LCF, which obtains accurate implicit HF information.
• We propose AFFM to counter the information-loss issue.
• DSR-EI outperforms other SoTA methods.
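The abstract names the frequency-domain branch and the fusion modules (LCF, AFFM) without describing their internals, so the sketch below is only a hedged illustration of the general idea, not the authors' method: it derives a generic high-frequency residual from the RGB guide by suppressing a centered low-frequency band in its Fourier spectrum, one common way to obtain implicit frequency-domain cues. The function name highpass_frequency_cues, the cutoff_ratio parameter, and the PyTorch framing are all assumptions.

```python
import torch

def highpass_frequency_cues(rgb, cutoff_ratio=0.1):
    """Illustrative frequency-domain high-pass filter (assumed, not the paper's LCF).

    rgb: (B, 3, H, W) guide image. Returns a high-frequency residual of the same
    shape, obtained by zeroing a centered low-frequency square in the spectrum.
    """
    _, _, H, W = rgb.shape
    # Move to the Fourier domain and center the zero frequency.
    spec = torch.fft.fftshift(torch.fft.fft2(rgb, norm="ortho"), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.arange(H, device=rgb.device),
        torch.arange(W, device=rgb.device),
        indexing="ij",
    )
    cy, cx = H // 2, W // 2
    # Keep only frequencies outside the central low-frequency block.
    keep = ((yy - cy).abs() / H > cutoff_ratio) | ((xx - cx).abs() / W > cutoff_ratio)
    spec = spec * keep.float()
    # Back to the spatial domain; the imaginary part is numerical noise.
    hf = torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1)), norm="ortho").real
    return hf
```

In a full model, such a residual would typically be encoded by a small CNN and fused with depth features at each stage of the multi-scale pipeline; the actual fusion rule used by AFFM is not specified in this abstract.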
Keywords
Guided depth super-resolution, CNN, Transformer, Multi-scale, High-frequency information