Motion inspires notion: self-supervised visual-LiDAR fusion for environment depth estimation

Mobile Systems, Applications, and Services (2022)

Abstract
Environment depth estimation by fusing camera and LiDAR enables a broad spectrum of applications such as autonomous driving, environmental perception, and context-aware localization and navigation. Various pioneering approaches have been proposed to achieve accurate and dense depth estimation by integrating vision and LiDAR through deep learning. However, due to the sparse sampling of in-vehicle LiDARs, the high overhead of ground-truth annotation, and severe dynamics in real environments, existing solutions have not yet seen widespread deployment on commercial autonomous vehicles. In this paper, we propose LeoVR, a self-supervised visual-LiDAR fusion approach that enables accurate environment depth estimation. LeoVR digs into the vehicle's motion information and builds two system frameworks on top of it to (i) optimize the depth estimation results and (ii) provide supervision signals for training a DNN. We fully implement LeoVR on a robotic testbed and a commercial vehicle and conduct extensive experiments over six months. The results demonstrate that LeoVR achieves remarkable performance with an average depth estimation error of 0.17 m, outperforming existing state-of-the-art solutions by > 43%. Moreover, even when cold-started in real environments with self-supervised training, LeoVR still achieves an average error of 0.21 m, outperforming related works by > 45% and remaining comparable to supervised training methods.
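
The abstract only sketches how motion yields a supervision signal, so the snippet below is a minimal, hypothetical illustration of the general idea rather than LeoVR's published implementation: given a depth prediction for one camera frame, the vehicle's ego-motion between two frames, and the camera intrinsics, a neighboring frame can be warped into the current view, and the photometric difference between the real and warped images serves as a self-supervised training loss. All function names, tensor shapes, and the choice of a plain L1 error are assumptions.

# Illustrative sketch (assumed, not LeoVR's code): ego-motion as a
# self-supervision signal for monocular depth via photometric reprojection.
import torch
import torch.nn.functional as F

def inverse_warp(src_img, depth, pose, K):
    """Warp the source frame into the target view.

    src_img: (B, 3, H, W) source RGB frame
    depth:   (B, 1, H, W) predicted depth of the target frame
    pose:    (B, 4, 4) rigid transform from target to source camera (ego-motion)
    K:       (B, 3, 3) camera intrinsics
    """
    B, _, H, W = src_img.shape
    device = src_img.device

    # Pixel grid of the target frame in homogeneous coordinates: (B, 3, H*W).
    v, u = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    ones = torch.ones_like(u)
    pix = torch.stack([u, v, ones], dim=0).view(1, 3, -1).expand(B, -1, -1)

    # Back-project target pixels into 3-D using the predicted depth.
    cam_points = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)

    # Move the 3-D points into the source camera frame with the ego-motion pose.
    R, t = pose[:, :3, :3], pose[:, :3, 3:]
    cam_points_src = R @ cam_points + t

    # Project into the source image plane.
    proj = K @ cam_points_src
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

    # Normalize coordinates to [-1, 1] and sample the source image.
    u_n = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v_n = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u_n, v_n], dim=-1).view(B, H, W, 2)
    return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)

def photometric_loss(tgt_img, src_img, depth, pose, K):
    """L1 photometric error between the target frame and the warped source frame."""
    warped = inverse_warp(src_img, depth, pose, K)
    return (tgt_img - warped).abs().mean()

In a visual-LiDAR setting, a photometric term of this kind would typically be combined with a sparse depth-consistency term against the LiDAR returns; how LeoVR actually constructs and combines its supervision signals is described in the full paper, not in this sketch.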