
WaRoNav: Warehouse Robot Navigation Based on Multi-view Visual-Inertial Fusion

Pattern Recognition and Computer Vision, PRCV 2023, Part III (2024)

Abstract
An accurate and globally consistent navigation system is crucial for the efficient functioning of warehouse robots. Among the various robot navigation techniques, tightly-coupled visual-inertial fusion stands out as one of the most promising approaches, owing to its complementary sensing and strong performance in terms of response time and accuracy. However, current state-of-the-art visual-inertial fusion methods suffer from limitations such as long-term drift and the loss of an absolute reference. To address these issues, this paper proposes a novel globally consistent multi-view visual-inertial fusion framework, called WaRoNav, for warehouse robot navigation. Specifically, the proposed method jointly exploits a downward-looking QR-vision sensor and a forward-looking visual-inertial sensor to estimate the robot's pose and velocity in real time. The downward camera provides absolute robot poses with respect to the global workshop frame. Furthermore, long-term visual-inertial drift, inertial biases, and velocities are periodically corrected at the spatial intervals between QR codes by minimizing visual-inertial residuals under rigid constraints given by the absolute poses estimated from the downward visual measurements. The effectiveness of the proposed method is evaluated on a purpose-built warehouse robot navigation platform. The experimental results show accuracy competitive with state-of-the-art approaches, with a maximal position error of 0.05 m and a maximal attitude error of 2 degrees, irrespective of trajectory length.
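The following is a minimal sketch, not the authors' implementation, of the drift-correction idea described in the abstract: a forward-looking visual-inertial odometry (VIO) pose drifts over time, and whenever the downward camera observes a QR code whose pose in the global workshop frame is known, the accumulated drift can be estimated and removed. It is simplified to a planar rigid-transform snap rather than the residual minimization described in the paper, and all names (Pose2D, absolute_pose_from_qr, drift_correction, the example coordinates) are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class Pose2D:
    """Planar pose (x, y, yaw) of the robot in some reference frame."""
    x: float
    y: float
    yaw: float

    def to_matrix(self) -> np.ndarray:
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        return np.array([[c, -s, self.x],
                         [s,  c, self.y],
                         [0., 0., 1.]])

    @staticmethod
    def from_matrix(T: np.ndarray) -> "Pose2D":
        return Pose2D(T[0, 2], T[1, 2], float(np.arctan2(T[1, 0], T[0, 0])))


def absolute_pose_from_qr(qr_global_pose: Pose2D, qr_in_robot: Pose2D) -> Pose2D:
    """Robot pose in the global workshop frame, given the surveyed global pose
    of the observed QR code and its pose measured by the downward camera
    (here the camera frame is assumed to coincide with the robot frame)."""
    T_world_qr = qr_global_pose.to_matrix()
    T_robot_qr = qr_in_robot.to_matrix()
    T_world_robot = T_world_qr @ np.linalg.inv(T_robot_qr)
    return Pose2D.from_matrix(T_world_robot)


def drift_correction(vio_pose: Pose2D, absolute_pose: Pose2D) -> np.ndarray:
    """Rigid transform that maps the drifted VIO estimate onto the absolute
    pose; applying it to subsequent VIO poses removes the accumulated drift."""
    return absolute_pose.to_matrix() @ np.linalg.inv(vio_pose.to_matrix())


if __name__ == "__main__":
    qr_global = Pose2D(10.0, 4.0, 0.0)     # surveyed QR position in the workshop
    qr_seen = Pose2D(0.02, -0.01, 0.03)    # QR pose seen by the downward camera
    vio_est = Pose2D(9.85, 4.10, 0.05)     # drifted VIO estimate

    absolute = absolute_pose_from_qr(qr_global, qr_seen)
    T_corr = drift_correction(vio_est, absolute)
    corrected = Pose2D.from_matrix(T_corr @ vio_est.to_matrix())
    print(corrected)  # coincides with the absolute pose by construction
```

In the paper itself the correction is tightly coupled, jointly refining inertial biases and velocities by minimizing visual-inertial residuals; the sketch above only illustrates how a downward QR observation yields an absolute pose that anchors the drifting estimate.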
Key words
Warehouse Robot, Navigation, Multiple View, Visual-Inertial Fusion