Improving visual odometry pipeline with feedback from forward and backward motion estimates

Mach. Vis. Appl. (2023)

Abstract
Estimating motion from visual cameras has become a promising area in autonomous navigation, and constant efforts are being made toward improving the accuracy of these estimates. In this paper, an improvement to the visual odometry algorithm is proposed that takes cues from both the forward and backward motion estimates. A consistency error is formulated that measures the difference between the forward and backward motion estimates. This error is used in a feedback mechanism to refine the triangulated 3D point estimates, thereby improving the pose estimate. Additionally, a novel means of incorporating information from multiple stereo camera setups is devised to further improve the pose estimate. The proposed scheme of joint forward–backward VO with multiple cameras and a feedback mechanism (JFBVO–FM) is validated on two publicly available datasets with different environmental conditions and camera motions, namely the KITTI and EuRoC Micro Aerial Vehicle (MAV) datasets. The results are analyzed both qualitatively and quantitatively, and the proposed scheme is found to outperform state-of-the-art methods on most sequences.
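The abstract does not spell out the exact error formulation, but the general idea of a forward–backward consistency check can be illustrated with a minimal sketch. The sketch below assumes poses are expressed as 4x4 SE(3) matrices; composing the forward estimate (frame k to k+1) with the backward estimate (frame k+1 to k) should yield the identity, and any deviation serves as a consistency error. The function names are hypothetical and not from the paper.

```python
# Illustrative sketch only: the paper's exact formulation is not given in
# the abstract. Assumes poses are 4x4 homogeneous SE(3) matrices.
import numpy as np

def pose_inverse(T):
    """Invert a 4x4 rigid-body transform using the SE(3) structure."""
    R, t = T[:3, :3], T[:3, 3]
    Tinv = np.eye(4)
    Tinv[:3, :3] = R.T
    Tinv[:3, 3] = -R.T @ t
    return Tinv

def consistency_error(T_fwd, T_bwd):
    """Forward-backward consistency error (hypothetical formulation).

    T_fwd: estimated motion from frame k to k+1.
    T_bwd: estimated motion from frame k+1 to k.
    If the two estimates agree, T_bwd @ T_fwd is the identity and both
    error terms are zero; deviations could drive a feedback correction.
    """
    D = T_bwd @ T_fwd  # identity when forward and backward agree
    # Rotation error: angle of the residual rotation matrix.
    rot_err = np.arccos(np.clip((np.trace(D[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    # Translation error: magnitude of the residual translation.
    trans_err = np.linalg.norm(D[:3, 3])
    return rot_err, trans_err
```

In a feedback scheme of this kind, a large consistency error for a frame pair would flag unreliable triangulated points, which could then be re-estimated or down-weighted before the pose is recomputed; how JFBVO–FM performs this step is detailed in the paper itself.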
Key words
Visual odometry, Forward-backward error, Multi-camera integration, Autonomous navigation, Localization