Multi-Vehicle Cooperative Simultaneous LiDAR SLAM and Object Tracking in Dynamic Environments

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2024)

Abstract
Simultaneous localization and mapping (SLAM) and moving object detection and tracking (MODT) are two fundamental problems for autonomous driving systems. Multi-vehicle cooperative SLAM and cooperative object perception, which take advantage of multi-vehicle information sharing, can overcome inherent limitations of single-vehicle perception, such as view occlusion. Solutions to SLAM and MODT usually rely on certain assumptions, such as the static-environment assumption for SLAM and the accurate ego-vehicle pose assumption for MODT. However, these assumptions are difficult or even impossible to satisfy in complex dynamic environments. We propose a LiDAR-based coupled cooperative simultaneous SLAM and MODT (C-SLAMMODT) strategy, which not only handles the SLAM and tracking problems in dynamic environments but also overcomes the limitations of single-vehicle perception. The proposed C-SLAMMODT outperforms both cooperative SLAM alone and cooperative MODT alone. The method includes a cooperative SLAM module that augments ego-vehicle pose estimation with information shared by neighboring vehicles, and a cooperative MODT module that applies a state-of-the-art adaptive feature-level fusion model to fuse multi-vehicle data, improving detection precision and overcoming perception limitations under occlusion. Furthermore, a unified factor graph optimization integrates information from ego-vehicle states, neighbor-vehicle shared data, and dynamic-object states to augment pose estimation and realize object tracking. Various comparative experiments demonstrate the performance and advantages of the proposed C-SLAMMODT solution in terms of accuracy and robustness.
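
To make the unified factor graph idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) using the GTSAM library in 2-D: ego-vehicle poses are constrained by LiDAR-odometry factors, a cooperative factor stands in for pose information shared by a neighboring vehicle, and object poses are tied to ego poses by detection factors and to each other by a loose motion factor. All factor choices, noise values, and measurements here are assumptions made for demonstration only.

# Hypothetical sketch of a joint pose-and-object factor graph (GTSAM, Python).
# Factor types, noise models, and measurements are illustrative placeholders.
import numpy as np
import gtsam

X = gtsam.symbol_shorthand.X   # ego-vehicle pose variables
O = gtsam.symbol_shorthand.O   # dynamic-object pose variables

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.02]))
odom_noise  = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.10, 0.10, 0.05]))
coop_noise  = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.20, 0.20, 0.10]))
det_noise   = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.30, 0.30, 0.15]))

# Prior on the first ego pose.
graph.add(gtsam.PriorFactorPose2(X(0), gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# (i) LiDAR-odometry factors between consecutive ego poses.
graph.add(gtsam.BetweenFactorPose2(X(0), X(1), gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(X(1), X(2), gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))

# (ii) Cooperative factor: a pose estimate for the ego vehicle at step 2, assumed
# to have been shared by a neighboring vehicle and transformed into the ego frame.
graph.add(gtsam.PriorFactorPose2(X(2), gtsam.Pose2(2.05, -0.02, 0.0), coop_noise))

# (iii) Object-tracking factors: relative detections of a moving object from two
# ego poses, plus a loose motion factor between consecutive object poses.
graph.add(gtsam.BetweenFactorPose2(X(1), O(1), gtsam.Pose2(3.0, 1.0, 0.0), det_noise))
graph.add(gtsam.BetweenFactorPose2(X(2), O(2), gtsam.Pose2(2.5, 1.0, 0.0), det_noise))
graph.add(gtsam.BetweenFactorPose2(O(1), O(2), gtsam.Pose2(0.5, 0.0, 0.0), odom_noise))

# Rough initial guesses for all variables, then joint optimization.
for key, pose in [(X(0), gtsam.Pose2(0, 0, 0)), (X(1), gtsam.Pose2(1, 0, 0)),
                  (X(2), gtsam.Pose2(2, 0, 0)), (O(1), gtsam.Pose2(4, 1, 0)),
                  (O(2), gtsam.Pose2(4.5, 1, 0))]:
    initial.insert(key, pose)

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print("ego pose X2:", result.atPose2(X(2)))
print("object pose O2:", result.atPose2(O(2)))

Because ego poses and object poses live in the same graph, the shared cooperative factor tightens the ego-pose estimate, which in turn improves the object-pose estimates through the detection factors; this coupling is the intuition behind optimizing SLAM and MODT jointly rather than in separate pipelines.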
Keywords
Multi-vehicle cooperative perception, SLAM, moving object detection and tracking (MODT), graph optimization, C-SLAMMODT