Scene-aware refinement network for unsupervised monocular depth estimation in ultra-low altitude oblique photography of UAV

ISPRS Journal of Photogrammetry and Remote Sensing (2023)

Abstract
Using depth estimation jointly with target detection networks to locate targets in the UAV field of view is a novel application in the depth estimation research field. Compared with autonomous driving scenarios, ultra-low altitude oblique photographic images contain more depth variations and low-texture regions, which makes it harder to train an excellent depth estimation network. This study investigates unsupervised monocular depth estimation for ultra-low altitude oblique photography images, aiming to let subsequent advanced vision tasks benefit from high-quality depth estimation results in complex scenes. Because the training data for complex ultra-low altitude oblique photography contain extensive low-textured areas, training with adjacent frames lacks effective back-projection directionality. To deal with this problem, we propose a self-supervised scene-aware refinement learning architecture built around enhanced feature perception. The architecture consists of a multi-resolution feature fusion depth network and a perceptual refinement network (PRNet), together with a pose network, to enhance regional differences in complex environments from a refined feature-context perspective and obtain higher-quality depth maps. Rethinking the problem of depth information recovery, we design an edge information aggregation (EIA) module, placed in the decoder, to refine the depth detail representation of local regions. We also design several loss terms to constrain the training of the network and improve the quality of depth estimation. Our method is compared with six state-of-the-art self-supervised monocular depth estimation methods on three datasets (UAVid 2020, WildUAV, UAV ula). The experimental results demonstrate that our model achieves the best performance in most scenarios.
The code and the private dataset (UAV ula) are publicly available at https://github.com/takisu0916/MRFEDepth.
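Self-supervised monocular depth training of the kind described above is commonly driven by a photometric reprojection error (an SSIM/L1 mix between the target frame and a view synthesized from an adjacent frame) plus an edge-aware smoothness term on the predicted disparity. The following is a minimal NumPy sketch of those two standard loss terms only, not the paper's implementation; the function names and the simplified global SSIM are illustrative assumptions:

```python
import numpy as np

def photometric_loss(target, recon, alpha=0.85):
    """Standard SSIM + L1 photometric error between a target frame and a
    view reconstructed from an adjacent frame (simplified: global SSIM
    statistics instead of a sliding window)."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2  # usual SSIM stabilizing constants
    mu_x, mu_y = target.mean(), recon.mean()
    var_x, var_y = target.var(), recon.var()
    cov = ((target - mu_x) * (recon - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    l1 = np.abs(target - recon).mean()
    # Common weighting: alpha on the SSIM term, (1 - alpha) on L1.
    return alpha * (1.0 - ssim) / 2.0 + (1.0 - alpha) * l1

def smoothness_loss(disp, image):
    """Edge-aware smoothness: penalize disparity gradients, downweighted
    where the image itself has strong gradients (i.e. at edges)."""
    dx_d = np.abs(np.diff(disp, axis=1))
    dy_d = np.abs(np.diff(disp, axis=0))
    dx_i = np.abs(np.diff(image, axis=1))
    dy_i = np.abs(np.diff(image, axis=0))
    return (dx_d * np.exp(-dx_i)).mean() + (dy_d * np.exp(-dy_i)).mean()
```

A perfect reconstruction drives the photometric term to zero, and a constant disparity map has zero smoothness cost, which is the sanity check one would run before wiring such terms into a full depth-and-pose training loop.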
Keywords
Monocular depth estimation, Ultra-low altitude oblique photography, Scene-aware refinement, Unmanned aerial vehicles (UAVs), Self-supervised learning, Target positioning