NALO-VOM: Navigation-Oriented LiDAR-Guided Monocular Visual Odometry and Mapping for Unmanned Ground Vehicles.

Ziqi Hu, Jing Yuan, Yuanxi Gao, Boran Wang, Xuebo Zhang

IEEE Trans. Intell. Veh. (2024)

Abstract
Monocular visual odometry (VO) is a fundamental technique for unmanned ground vehicle (UGV) navigation. However, traditional monocular VO methods typically produce sparse environment maps that cannot be used directly for navigation because they lack structural information. In this article, we propose NALO-VOM, a navigation-oriented LiDAR-guided monocular visual odometry and mapping system, to obtain scale-consistent camera poses and a semi-dense environment map that is better suited to UGV navigation. The structure-representation ability of the 3D LiDAR point cloud is learned by a major-plane prediction network and then transferred into the monocular VO system in NALO-VOM. As a result, NALO-VOM can construct a denser, higher-quality map using only a monocular camera. Specifically, the major-plane prediction network is trained offline on 3D LiDAR geometric information and predicts a major-plane mask (MP-Mask) for each visual frame during localization. The MP-Mask is then used for scale optimization and semi-dense map building. Experiments are performed on a public dataset and self-collected sequences. The results show competitive localization accuracy and mapping quality compared with other visual odometry methods.
Keywords
Navigation-oriented visual odometry, semi-dense map building, unmanned ground vehicles
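To make the abstract's MP-Mask-based scale optimization step more concrete, below is a minimal sketch of one common way a major (ground) plane mask can be used to recover metric scale in monocular VO: fit a plane to the up-to-scale 3D points that project into the mask and compare the estimated camera-to-plane distance with the known camera mounting height. The function names, the SVD plane fit, and the known-height assumption are illustrative only and are not taken from the paper's actual implementation.

```python
import numpy as np

def fit_plane_svd(points):
    """Fit a plane n·x + d = 0 to an (N, 3) point set via SVD; return (unit normal, d)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance = plane normal
    d = -normal.dot(centroid)
    return normal, d

def scale_from_ground_mask(points_cam, pixels, mp_mask, camera_height):
    """Recover a metric scale factor from up-to-scale points lying on the major plane.

    points_cam    : (N, 3) triangulated points in the camera frame (arbitrary scale)
    pixels        : (N, 2) integer pixel coordinates (u, v) of those points
    mp_mask       : (H, W) boolean mask marking the major (ground) plane
    camera_height : known metric height of the camera above the ground [m] (assumed)
    """
    on_plane = mp_mask[pixels[:, 1], pixels[:, 0]]   # keep points that fall inside the mask
    ground_pts = points_cam[on_plane]
    if len(ground_pts) < 3:
        return None                                   # not enough support to fit a plane
    _, d = fit_plane_svd(ground_pts)
    est_height = abs(d)                               # camera origin is at (0, 0, 0)
    return camera_height / est_height                 # multiply poses/points by this factor

# Usage example with synthetic data: points on a plane 0.7 (arbitrary) units below the camera
rng = np.random.default_rng(0)
xz = rng.uniform(-5, 5, size=(200, 2))
pts = np.column_stack([xz[:, 0], np.full(200, 0.7), xz[:, 1]])   # y-down camera convention
pix = rng.integers(0, 480, size=(200, 2))
mask = np.ones((480, 640), dtype=bool)
print(scale_from_ground_mask(pts, pix, mask, camera_height=1.65))  # ≈ 2.357
```

In practice such a scale factor would be estimated per frame (or per keyframe) and smoothed over time; degenerate cases with too few masked points are skipped, as in the sketch above.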