Grounding Image Matching in 3D with MASt3R
arXiv (2024)
Abstract
Image Matching is a core component of all best-performing algorithms and
pipelines in 3D vision. Yet despite matching being fundamentally a 3D problem,
intrinsically linked to camera pose and scene geometry, it is typically treated
as a 2D problem. This makes sense as the goal of matching is to establish
correspondences between 2D pixel fields, but also seems like a potentially
hazardous choice. In this work, we take a different stance and propose to cast
matching as a 3D task with DUSt3R, a recent and powerful 3D reconstruction
framework based on Transformers. Based on pointmaps regression, this method
displayed impressive robustness in matching views with extreme viewpoint
changes, yet with limited accuracy. We aim here to improve the matching
capabilities of such an approach while preserving its robustness. We thus
propose to augment the DUSt3R network with a new head that outputs dense local
features, trained with an additional matching loss. We further address the
issue of quadratic complexity of dense matching, which becomes prohibitively
slow for downstream applications if not carefully treated. We introduce a fast
reciprocal matching scheme that not only accelerates matching by orders of
magnitude, but also comes with theoretical guarantees and, lastly, yields
improved results. Extensive experiments show that our approach, coined MASt3R,
significantly outperforms the state of the art on multiple matching tasks. In
particular, it beats the best published methods by 30% absolute in VCRE AUC on
the extremely challenging Map-free localization dataset.
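The fast reciprocal matching scheme mentioned above can be illustrated with a minimal sketch. The core idea is to avoid comparing every pixel of one image against every pixel of the other: starting from a subsampled set of seed points, each point is alternately mapped to its nearest neighbor in the other image's descriptor map, and pairs that return to their starting point are reciprocal fixed points kept as matches. The function name, the iteration count, and the subsampling stride below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fast_reciprocal_nn(d1, d2, n_iters=4, subsample=8):
    """Sketch of iterated reciprocal nearest-neighbor matching.

    d1: (N, C) and d2: (M, C) L2-normalized dense descriptors
    (hypothetical inputs; in MASt3R these would come from the
    dense local-feature head). Returns a list of (i, j) matches.
    """
    # Seed with a subsampled set of points from image 1.
    cur = np.arange(0, len(d1), subsample)
    active = np.ones(len(cur), dtype=bool)
    matches = []
    for _ in range(n_iters):
        # Map each current point to its NN in image 2, then back.
        j = (d1[cur] @ d2.T).argmax(axis=1)
        back = (d2[j] @ d1.T).argmax(axis=1)
        # Points that return to themselves are reciprocal fixed points.
        converged = back == cur
        keep = converged & active
        matches.extend(zip(cur[keep].tolist(), j[keep].tolist()))
        active &= ~converged
        if not active.any():
            break
        # Continue iterating from the returned points.
        cur = back
    return matches
```

Because the number of seeds shrinks as points converge, the total work is far below the quadratic cost of exhaustive mutual nearest-neighbor matching, which is what makes the scheme practical for dense downstream use.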