TAIL: A Terrain-Aware Multi-Modal SLAM Dataset for Robot Locomotion in Deformable Granular Environments
CoRR (2024)
Abstract
Terrain-aware perception holds the potential to improve the robustness and
accuracy of autonomous robot navigation in the wild, thereby facilitating
effective off-road traversal. However, the lack of multi-modal perception data
across various motion patterns hinders Simultaneous Localization and Mapping
(SLAM) solutions, especially when confronting non-geometric hazards in
demanding landscapes. In this paper, we present the Terrain-Aware multI-modaL
(TAIL) dataset, tailored to deformable, sandy terrains. It incorporates various
types of robotic proprioception and distinct ground interactions, posing unique
challenges and providing a benchmark for multi-sensor fusion SLAM. The
versatile sensor suite comprises stereo frame cameras,
multiple ground-pointing RGB-D cameras, a rotating 3D LiDAR, an IMU, and an RTK
device. This ensemble is hardware-synchronized, well-calibrated, and
self-contained. Utilizing both wheeled and quadrupedal locomotion, we
efficiently collect comprehensive sequences that capture rich unstructured
scenarios, spanning a spectrum of scope, terrain interactions, scene changes,
ground-level properties, and dynamic robot characteristics. We benchmark
several state-of-the-art SLAM methods against ground truth and provide
performance validations, reporting the corresponding challenges and
limitations. All associated resources are accessible upon request at
.