Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion

Computer Vision (2013)

Abstract
Multi-view structure from motion (SfM) estimates the position and orientation of pictures in a common 3D coordinate frame. When views are treated incrementally, this external calibration can be subject to drift, contrary to global methods that distribute residual errors evenly. We propose a new global calibration approach based on the fusion of relative motions between image pairs. We improve an existing method for robustly computing global rotations. We present an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted. We also define an efficient translation registration method that recovers accurate camera positions. These components are combined into an original SfM pipeline. Our experiments show that, on most datasets, it outperforms in accuracy other existing incremental and global pipelines. It also achieves strikingly good running times: it is about 20 times faster than the other global method we could compare to, and as fast as the best incremental method. More importantly, it features better scalability properties.
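To make the rotation-fusion idea concrete, below is a minimal sketch of chordal L2 rotation averaging in Python. It is illustrative only and not the robust method proposed in the paper: it assumes pairwise relative rotations R_ij with the convention R_j ≈ R_ij R_i, fixes camera 0 to the identity to remove the gauge freedom, solves the resulting linear least-squares system, and projects each block back onto SO(3). The function names average_rotations and project_to_so3 are hypothetical.

```python
import numpy as np

def project_to_so3(M):
    """Project a 3x3 matrix onto the nearest rotation (Frobenius norm)."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # ensure a proper rotation (det = +1)
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def average_rotations(n_views, rel_rotations):
    """Chordal L2 rotation averaging (illustrative sketch).

    rel_rotations: dict {(i, j): R_ij} with the convention R_j ≈ R_ij @ R_i.
    Camera 0 is fixed to the identity; returns a list of 3x3 global rotations.
    """
    n_unknown = n_views - 1                    # R_0 is fixed
    A = np.zeros((3 * len(rel_rotations), 3 * n_unknown))
    B = np.zeros((3 * len(rel_rotations), 3))
    for k, ((i, j), R_ij) in enumerate(rel_rotations.items()):
        rows = slice(3 * k, 3 * k + 3)
        # Linear constraint R_j - R_ij @ R_i = 0, written block-wise.
        if j > 0:
            A[rows, 3 * (j - 1):3 * j] = np.eye(3)
        else:
            B[rows] -= np.eye(3)               # known R_0 = I moves to the RHS
        if i > 0:
            A[rows, 3 * (i - 1):3 * i] = -R_ij
        else:
            B[rows] += R_ij                    # -R_ij @ R_0 moves to the RHS
    X, *_ = np.linalg.lstsq(A, B, rcond=None)  # stacked unknown rotations
    rotations = [np.eye(3)]
    for v in range(n_unknown):
        rotations.append(project_to_so3(X[3 * v:3 * v + 3]))
    return rotations
```

A real global SfM pipeline would additionally weight or reject outlier relative rotations before averaging; this sketch only shows how consistent global rotations emerge from pairwise ones.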
Keywords
calibration, cameras, feature extraction, image fusion, image registration, motion estimation, 3D coordinate frame, SfM pipeline, camera positions, a contrario trifocal tensor estimation method, external calibration, global calibration approach, global fusion, multiview structure from motion, picture orientation estimation, position estimation, relative motions, residual errors, scalable structure from motion, translation registration method, Calibration, Structure-from-Motion, robust estimation