On Visual-Aided LiDAR-Inertial Odometry System in Challenging Subterranean Environments

Semantic Scholar (2021)

Abstract
Simultaneous Localization and Mapping (SLAM) is one of the fundamental components of autonomous robotic exploration. The goal of SLAM is to create an accurate map of the environment and provide robust state estimation for planning, control, and perception tasks. However, due to the nature of different sensors, SLAM estimates are prone to drift and failure in degraded environments. For example, LiDAR-based estimation algorithms tend to fail in geometrically degenerate environments (elevator shafts, long corridors), and vision-based algorithms fail similarly in texture-less environments. In the DARPA Subterranean Challenge, a fleet of robots is sent to navigate through various tunnel, urban, and cave environments, where degenerate and challenging scenes frequently arise, creating many failure cases for a LiDAR-inertial state estimation system. To extend the robot's exploration capability to these difficult cases, we introduce an additional visual-inertial odometry pipeline into the system. In this thesis, we demonstrate that a vision-aided LiDAR-inertial odometry system can provide more robust state estimation in challenging environments. We first discuss methods for detecting failure and degeneracy in LiDAR and visual odometry. We then describe the depth-enhanced visual-inertial odometry pipeline, including its hardware setup and software architecture. Finally, we present a complete visual-LiDAR-inertial state estimation pipeline and show that our system can overcome extremely challenging environments such as elevator shafts and long corridors.
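The degeneracy detection mentioned in the abstract can be made concrete. A common approach in LiDAR odometry, sketched here as an assumption rather than the thesis's confirmed method, is eigenvalue analysis of the scan-matching Hessian: pose directions whose eigenvalues fall below a threshold are treated as poorly constrained, and state updates along them can be deferred to another sensor. A minimal sketch, assuming a numpy Jacobian `J` of the scan-matching residuals and an illustrative threshold:

```python
# A minimal sketch of eigenvalue-based degeneracy detection for LiDAR
# scan matching. The Jacobian J is assumed given by the registration
# front end; the threshold value is hypothetical and must be tuned.
import numpy as np

def detect_degenerate_directions(J, eig_threshold=100.0):
    """Find state-space directions in which registration is poorly constrained.

    J : (m, 6) Jacobian of scan-matching residuals w.r.t. the 6-DoF pose.
    eig_threshold : eigenvalues of J^T J below this are flagged as
                    degenerate (illustrative value, not from the thesis).
    """
    H = J.T @ J                           # Gauss-Newton approximation of the Hessian
    eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order
    degenerate = eigvals < eig_threshold
    # Flagged columns of eigvecs span the weakly observable subspace,
    # e.g. translation along the axis of a long featureless corridor.
    return eigvals, eigvecs[:, degenerate]
```

In a long corridor, for instance, planar points constrain only the directions normal to the walls, so translation along the corridor axis shows up as a small eigenvalue; a vision-aided system can then fall back on visual-inertial estimates along exactly those flagged directions.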