
Salient-VPR: Salient Weighted Global Descriptor for Visual Place Recognition

Ke Wang, Shengjie Luo, Tao Chen, Jianbo Lu

IEEE Transactions on Instrumentation and Measurement (2022)

Abstract
Visual place recognition (VPR) is a widely investigated but challenging problem, which must cope with appearance changes caused by varying weather, illumination, and seasons, as well as dynamic objects in complex environments. In this article, we propose Salient-VPR, which combines image retrieval, semantic information, and saliency cues to achieve accurate estimates. We provide a novel formulation for combining local semantic features into global descriptors. Unlike traditional global descriptors, which typically summarize the whole visual content of images or image regions, we aggregate dense local semantic descriptors based on pixel-level semantic scores to form global semantic descriptors. We then compute predicted saliency descriptors to learn representations of static objects in images. These saliency descriptors are learned from a dataset tailored to VPR tasks. Finally, we introduce a late-fusion module, which increases the stability of the descriptor and avoids performance degradation caused by the limitations of semantic and saliency prediction. Our method outperforms traditional global descriptors in experiments, attaining state-of-the-art place recognition performance on a variety of challenging datasets, including Pitts30k, Mapillary Street-Level Sequences (MSLS), and Tokyo 24/7.
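To illustrate the general idea described in the abstract (score-weighted aggregation of dense local descriptors into global descriptors, followed by late fusion), here is a minimal sketch. It is not the authors' implementation; the function names, the mixing weight alpha, and the concatenation-based fusion are illustrative assumptions.

```python
import numpy as np

def aggregate_weighted(local_descs, weights):
    """Aggregate dense local descriptors into one global descriptor,
    weighting each local descriptor by its per-pixel score.

    local_descs: (N, D) dense local features (one row per pixel/region)
    weights:     (N,) per-pixel scores (e.g., semantic or saliency scores)
    """
    w = weights / (weights.sum() + 1e-8)           # normalize scores to sum to 1
    g = (local_descs * w[:, None]).sum(axis=0)     # weighted sum over locations
    return g / (np.linalg.norm(g) + 1e-8)          # L2-normalize the descriptor

def late_fusion(desc_semantic, desc_saliency, alpha=0.5):
    """Fuse semantic-weighted and saliency-weighted global descriptors.
    alpha is a hypothetical mixing weight; the paper's fusion module may differ.
    """
    fused = np.concatenate([alpha * desc_semantic, (1 - alpha) * desc_saliency])
    return fused / (np.linalg.norm(fused) + 1e-8)

# Toy usage: 1000 local descriptors of dimension 128 with stand-in scores
rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 128))
sem_scores = rng.random(1000)    # stand-in for pixel-level semantic scores
sal_scores = rng.random(1000)    # stand-in for predicted saliency scores

g_sem = aggregate_weighted(feats, sem_scores)
g_sal = aggregate_weighted(feats, sal_scores)
g = late_fusion(g_sem, g_sal)
print(g.shape)  # (256,) fused global descriptor used for retrieval
```

In a retrieval setting, such fused descriptors would be compared by cosine or Euclidean distance against a database of reference images to find the closest place match.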
Key words
Computer vision, feature representation, image retrieval, place recognition, saliency prediction, semantic segmentation