Location Graphs For Visual Place Recognition

2015 IEEE International Conference on Robotics and Automation (ICRA)

Abstract
With the growing demand for deployment of robots in real scenarios, robustness in the perception capabilities for navigation lies at the forefront of research interest, as this forms the backbone of robotic autonomy. Existing place recognition approaches traditionally follow the feature-based bag-of-words paradigm in order to cut down on the richness of information in images. As structural information is typically ignored, such methods suffer from perceptual aliasing and reduced recall, due to the ambiguity of observations. In a bid to boost the robustness of appearance-based place recognition, we consider the world as a continuous constellation of visual words, while keeping track of their covisibility in a graph structure. Locations are queried based on their appearance, and modelled by their corresponding cluster of landmarks from the global covisibility graph, which retains important relational information about landmarks. Complexity is reduced by comparing locations by their graphs of visual words in a simplified manner. Test results show increased recall performance and robustness to noisy observations, compared to state-of-the-art methods.
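The core idea above, tracking the covisibility of visual words in a graph and scoring candidate locations by the relational structure they share with the query rather than by word counts alone, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the co-occurrence edge weighting, and the shared-edge similarity score are illustrative assumptions.

```python
from collections import defaultdict
from itertools import combinations

def build_covisibility_graph(observations):
    """Build a global covisibility graph from per-image observations.

    observations: list of sets of visual-word IDs seen together in one image.
    Returns a dict mapping frozenset({u, v}) -> co-occurrence count
    (an edge weight between two visual words).
    """
    graph = defaultdict(int)
    for words in observations:
        # Every pair of words observed in the same image becomes
        # (or strengthens) a covisibility edge.
        for u, v in combinations(sorted(words), 2):
            graph[frozenset((u, v))] += 1
    return graph

def location_similarity(query_words, location_words, graph):
    """Score two locations by the total weight of the covisibility
    edges induced by their shared visual words, so that structurally
    consistent matches outrank coincidental word overlaps.
    """
    shared = query_words & location_words
    score = 0
    for u, v in combinations(sorted(shared), 2):
        score += graph.get(frozenset((u, v)), 0)
    return score
```

For example, with observations `[{1, 2, 3}, {2, 3, 4}, {1, 3}]`, the edge between words 1 and 3 gets weight 2, and a query `{1, 2, 3}` scores a location `{2, 3, 4}` by the weight of their one shared edge (2, 3). A real system would additionally cluster the graph into per-location subgraphs, which is how the paper models each place.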
Keywords
location graph structure, visual place recognition, robot deployment, navigation, robotic autonomy, place recognition approach, feature-based bag-of-words paradigm, perceptual aliasing, appearance-based place recognition, continuous constellation of visual words, landmark cluster, global covisibility graph