Unifying Scene Representation and Hand-Eye Calibration with 3D Foundation Models
CoRR (2024)
Abstract
Representing the environment is a central challenge in robotics, and is
essential for effective decision-making. Traditionally, before capturing images
with a manipulator-mounted camera, users must calibrate the camera against a
dedicated external marker, such as a checkerboard or an AprilTag. However, recent
advances in computer vision have led to the development of 3D foundation
models. These are large, pre-trained neural networks that can establish fast
and accurate multi-view correspondences with very few images, even in the
absence of rich visual features. This paper advocates for the integration of 3D
foundation models into scene representation approaches for robotic systems
equipped with manipulator-mounted RGB cameras. Specifically, we propose the
Joint Calibration and Representation (JCR) method. JCR uses RGB images,
captured by a manipulator-mounted camera, to simultaneously construct an
environmental representation and calibrate the camera relative to the robot's
end-effector, in the absence of specific calibration markers. The resulting 3D
environment representation is aligned with the robot's coordinate frame and
maintains physically accurate scales. We demonstrate that JCR can build
effective scene representations using a low-cost RGB camera attached to a
manipulator, without prior calibration.
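The abstract states that JCR aligns the 3D reconstruction to the robot's coordinate frame with physically accurate scale. A standard building block for this kind of alignment (not necessarily JCR's exact formulation) is the Umeyama similarity transform: given camera centers recovered up to scale by a multi-view reconstruction and the corresponding end-effector positions reported by the robot, a closed-form SVD solution recovers the scale, rotation, and translation relating the two frames. The sketch below assumes noiseless correspondences and made-up point data for illustration.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Estimate a similarity transform (scale c, rotation R, translation t)
    such that dst ~= c * R @ src_i + t, via Umeyama's closed-form SVD method.

    src, dst: (N, 3) arrays of corresponding 3D points
    (e.g. up-to-scale camera centers vs. metric end-effector positions)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)            # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                            # avoid reflections
    R = U @ S @ Vt
    var_s = (src_c ** 2).sum() / len(src)       # source variance
    c = np.trace(np.diag(D) @ S) / var_s        # metric scale factor
    t = mu_d - c * R @ mu_s
    return c, R, t

# Synthetic check: recover a known scale, rotation, and translation.
rng = np.random.default_rng(0)
src = rng.standard_normal((8, 3))               # hypothetical camera centers
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
c_true, t_true = 2.5, np.array([0.1, -0.2, 0.3])
dst = c_true * src @ R_true.T + t_true          # metric end-effector positions
c_hat, R_hat, t_hat = umeyama_alignment(src, dst)
```

In the noiseless case the transform is recovered exactly; with real correspondences this least-squares solution degrades gracefully, which is why similarity alignment of this kind is a common final step when grafting an up-to-scale vision reconstruction onto a metric robot frame.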