A Novel Lidar-Assisted Monocular Visual SLAM Framework for Mobile Robots in Outdoor Environments

IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT (2022)

Abstract
In this article, a novel 3-D lidar-assisted monocular visual simultaneous localization and mapping (LAMV-SLAM) framework is proposed for mobile robots in outdoor environments. LAMV-SLAM runs in real time without a GPU and builds a dense map with real scale. An online photometric calibration thread is integrated into LAMV-SLAM to eliminate photometric disturbances in the images. The tracking thread combines lidar and vision data to estimate and refine the frame-to-frame transformation. Within this thread, a depth fusion algorithm is proposed to provide accurate depth values for the extracted visual features by incorporating lidar points, and a novel two-stage optimization method is proposed to estimate the camera transformation with real scale from the fused lidar-vision data. A parallel mapping thread generates new map points based on a depth filter and lidar-vision data fusion. A loop closing thread further reduces the accumulated errors of the system. To verify the accuracy and efficiency of the system, we evaluated the proposed pipeline on the KITTI odometry benchmark, where LAMV-SLAM achieves 0.81% relative position drift while running at over 3x real-time speed. To verify the robustness of the system in challenging environments, experiments were carried out on the North Campus Long-Term (NCLT) and nuScenes datasets. Moreover, real-world experiments were conducted on our mobile robot platform to demonstrate the practicality and validity of the proposed approach.
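The abstract does not give the details of the depth fusion step. A common way to realize the general idea it describes (assigning real-scale depths to monocular features from lidar points) is to project the lidar points into the image with the camera intrinsics and interpolate depth at each feature location. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm; the function names, the pinhole model, the 4-pixel search radius, and the inverse-distance weighting are all assumptions for illustration.

```python
import numpy as np

def project_lidar_to_image(points_cam, K):
    """Project 3-D lidar points (assumed already transformed into the
    camera frame) onto the image plane with pinhole intrinsics K.
    Returns pixel coordinates and depths for points in front of the camera."""
    in_front = points_cam[:, 2] > 0.1           # drop points behind / too close
    pts = points_cam[in_front]
    uv = (K @ pts.T).T                          # homogeneous pixel coords
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective division
    return uv, pts[:, 2]

def fuse_feature_depths(features_uv, lidar_uv, lidar_depth, radius=4.0):
    """Assign each visual feature a depth by inverse-distance weighting of
    projected lidar points within `radius` pixels (hypothetical choice);
    features with no lidar support get NaN and would fall back to the
    depth filter in the mapping thread."""
    depths = np.full(len(features_uv), np.nan)
    for i, f in enumerate(features_uv):
        d2 = np.sum((lidar_uv - f) ** 2, axis=1)
        near = d2 < radius ** 2
        if near.any():
            w = 1.0 / (np.sqrt(d2[near]) + 1e-6)
            depths[i] = np.sum(w * lidar_depth[near]) / np.sum(w)
    return depths
```

With real-scale depths attached to the features, a frame-to-frame pose estimate can be optimized against metric 3-D points, which is what removes the scale ambiguity of pure monocular SLAM.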
Keywords
Depth fusion, lidar-assisted monocular visual simultaneous localization and mapping (LAMV-SLAM), mobile robots, online photometric calibration