Continuous Self-Localization on Aerial Images Using Visual and Lidar Sensors.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022

Abstract
This paper proposes a novel method for geo-tracking, i.e., continuous metric self-localization in outdoor environments by registering a vehicle's sensor information with aerial imagery of an unseen target region. Geo-tracking methods offer the potential to supplant noisy signals from global navigation satellite systems (GNSS) and the expensive, hard-to-maintain prior maps that are typically used for this purpose. The proposed geo-tracking method aligns data from on-board cameras and lidar sensors with geo-registered orthophotos to continuously localize a vehicle. We train a model in a metric learning setting to extract visual features from ground and aerial images. The ground features are projected into a top-down perspective via the lidar points and are matched with the aerial features to determine the relative pose between the vehicle and the orthophoto. Our method is the first to utilize on-board cameras in an end-to-end differentiable model for metric self-localization on unseen orthophotos. It exhibits strong generalization, is robust to changes in the environment, and requires only geo-poses as ground truth. We evaluate our approach on the KITTI-360 dataset and achieve a mean absolute position error (APE) of 0.94 m. We further compare with previous approaches on the KITTI odometry dataset and achieve state-of-the-art results on the geo-tracking task.
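The following is a minimal sketch, not the authors' implementation, of the core idea described in the abstract: per-point visual features sampled from the on-board camera are scattered via the lidar points into a top-down (bird's-eye-view) grid, which is then matched against aerial-image features to score candidate vehicle poses. All names, grid sizes, and the exhaustive translation search are illustrative assumptions; the paper's model is end-to-end differentiable and learned with metric learning.

```python
import numpy as np

def lidar_to_bev_features(points_xyz, point_feats, grid_size=128, cell_m=0.5):
    """Scatter per-lidar-point visual features (assumed already sampled from
    the ground camera) into a top-down feature grid centered on the vehicle."""
    C = point_feats.shape[1]
    bev = np.zeros((grid_size, grid_size, C), dtype=np.float32)
    count = np.zeros((grid_size, grid_size, 1), dtype=np.float32)
    half = grid_size * cell_m / 2.0
    # Map metric x/y coordinates to grid cell indices.
    cols = ((points_xyz[:, 0] + half) / cell_m).astype(int)
    rows = ((points_xyz[:, 1] + half) / cell_m).astype(int)
    valid = (cols >= 0) & (cols < grid_size) & (rows >= 0) & (rows < grid_size)
    for r, c, f in zip(rows[valid], cols[valid], point_feats[valid]):
        bev[r, c] += f
        count[r, c] += 1.0
    return bev / np.maximum(count, 1.0)  # average the features per cell

def pose_score(bev_feats, aerial_feats, offset_rc):
    """Similarity between the ground BEV features and an aerial feature crop
    shifted by a candidate (row, col) offset; higher means better alignment."""
    r0, c0 = offset_rc
    h, w, _ = bev_feats.shape
    crop = aerial_feats[r0:r0 + h, c0:c0 + w]
    return float(np.sum(bev_feats * crop))

def localize(bev_feats, aerial_feats, search=8):
    """Exhaustively score translation offsets (a simple stand-in for the
    paper's differentiable pose estimation) and return the best shift."""
    best, best_rc = -np.inf, (0, 0)
    for dr in range(search):
        for dc in range(search):
            s = pose_score(bev_feats, aerial_feats, (dr, dc))
            if s > best:
                best, best_rc = s, (dr, dc)
    return best_rc

# Toy usage with random arrays standing in for learned features.
pts = np.random.uniform(-30, 30, size=(5000, 3))
feats = np.random.rand(5000, 16).astype(np.float32)
bev = lidar_to_bev_features(pts, feats)
aerial = np.random.rand(136, 136, 16).astype(np.float32)
print("best offset (cells):", localize(bev, aerial))
```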
Keywords
aerial features, aerial imagery, continuous metric self-localization, geo-registered orthophotos, geo-tracking methods, global navigation satellite systems, GNSS, ground features, lidar points, lidar sensor, metric learning, on-board cameras, orthophotos, outdoor environments, sensor information, visual feature extraction, visual sensor