Learning Cross-Scale Visual Representations for Real-Time Image Geo-Localization

IEEE Robotics and Automation Letters (2022)

Abstract
Robot localization remains a challenging task in GPS-denied environments. State estimation approaches based on local sensors, e.g., cameras or IMUs, are drift-prone over long-range missions as error accumulates. In this study, we aim to address this problem by localizing image observations in a 2D multimodal geospatial map. We introduce the cross-scale dataset and a methodology for producing additional data from cross-modality sources. We propose a framework that learns cross-scale visual representations without supervision. Experiments are conducted on data from two different domains, underwater and aerial. In contrast to existing studies in cross-view image geo-localization, our approach a) performs better on smaller-scale multimodal maps; b) is more computationally efficient for real-time applications; c) can serve directly in concert with state estimation pipelines.
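The matching step the abstract describes can be illustrated with a minimal sketch: given an embedding of the current image observation and precomputed embeddings of map tiles, localization reduces to a nearest-neighbor lookup by cosine similarity. The embedding dimensions, tile count, and the `localize` helper below are illustrative assumptions, not the paper's actual architecture; random vectors stand in for the outputs of the learned representation network.

```python
import math
import random

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def localize(query_emb, tile_embs):
    # Return the index of the map tile whose embedding best
    # matches the query observation (nearest neighbor by cosine).
    return max(range(len(tile_embs)),
               key=lambda i: cosine(query_emb, tile_embs[i]))

# Toy setup: 16 map-tile embeddings (stand-ins for the learned
# representations of geospatial map patches) in 8 dimensions.
random.seed(0)
tiles = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]

# A noisy observation of tile 5 should localize back to tile 5.
query = [t + random.gauss(0, 0.05) for t in tiles[5]]
print(localize(query, tiles))
```

Because the lookup is a single similarity scan over precomputed tile embeddings, it is cheap enough to run inside a real-time state estimation loop, which is consistent with the efficiency claim above.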
Key words
Deep learning for visual perception, marine robotics, representation learning