DiffPoint: Single and Multi-view Point Cloud Reconstruction with ViT Based Diffusion Model
CoRR (2024)
Abstract
As 2D-to-3D reconstruction gains attention in various real-world scenarios, it is crucial to generate high-quality point clouds. Despite the recent success of deep learning models in generating point clouds, producing high-fidelity results remains challenging because of the disparities between images and point clouds.
While vision transformers (ViT) and diffusion models have shown promise in various vision tasks, their benefits for reconstructing point clouds from images have yet to be demonstrated. In this paper, we propose DiffPoint, a neat and powerful architecture that combines ViT and diffusion models for point cloud reconstruction. At each diffusion step, we
divide the noisy point clouds into irregular patches. Then, using a standard
ViT backbone that treats all inputs as tokens (including time information,
image embeddings, and noisy patches), we train our model to predict target
points based on input images. We evaluate DiffPoint on both single-view and
multi-view reconstruction tasks and achieve state-of-the-art results.
Additionally, we introduce a unified and flexible feature fusion module for
aggregating image features from single or multiple input images. Furthermore,
our work demonstrates the feasibility of applying unified architectures across language and images to improve 3D reconstruction tasks.