Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors
CoRR (2023)
Abstract
Most 3D generation research focuses on up-projecting 2D foundation models
into the 3D space, either by minimizing the 2D Score Distillation Sampling (SDS)
loss or by fine-tuning on multi-view datasets. Without explicit 3D priors, these
methods often lead to geometric anomalies and multi-view inconsistency.
Recently, researchers have attempted to improve the geometric fidelity of 3D objects
by training directly on 3D datasets, albeit at the cost of low-quality texture
generation due to the limited texture diversity in such datasets. To harness the
advantages of both approaches, we propose Bidirectional Diffusion (BiDiff), a
unified framework that incorporates both a 3D and a 2D diffusion process,
preserving 3D fidelity and 2D texture richness, respectively. Moreover, as a
simple combination may yield inconsistent generation results, we further bridge
them with novel bidirectional guidance. In addition, our method can serve as
an initialization for optimization-based models, further improving both the quality
of the 3D model and the efficiency of optimization and reducing the generation process
from 3.4 hours to 20 minutes. Experimental results show that our model
achieves high-quality, diverse, and scalable 3D generation. Project website:
https://bidiff.github.io/.
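
For context, the 2D SDS loss mentioned in the abstract refers to the score-distillation gradient introduced in DreamFusion (Poole et al., 2022). The standard formulation, reproduced here for reference rather than taken from this paper, is

$$
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\phi, \mathbf{x} = g(\theta)) = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\hat{\epsilon}_\phi(\mathbf{z}_t; y, t) - \epsilon\big)\, \frac{\partial \mathbf{x}}{\partial \theta} \right]
$$

where g(θ) renders the 3D representation θ into an image x, z_t is the noised render at timestep t, ε̂_φ is a frozen 2D diffusion denoiser conditioned on the text prompt y, ε is the injected noise, and w(t) is a weighting schedule.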
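The abstract's key idea, coupling a 2D and a 3D diffusion process so that each guides the other at every denoising step, can be illustrated with a minimal runnable sketch. Everything below is a hypothetical placeholder built from simple array ops, not the paper's actual architecture or API: the names render_views, lift_to_3d, denoise_2d, denoise_3d, and the shared resolution RES are all assumptions made for illustration.

```python
import numpy as np

RES = 32  # shared resolution for guidance views and the voxel grid (assumption)

def render_views(x3d):
    # Stub renderer: orthographic "projections" of the voxel grid along its
    # three axes stand in for views rendered from the current 3D state.
    return np.stack([x3d.mean(axis=a) for a in range(3)])

def lift_to_3d(views):
    # Stub lifting: broadcast the averaged views back into a volume, standing
    # in for unprojecting denoised 2D features into 3D space.
    return np.broadcast_to(views.mean(axis=0), (RES, RES, RES)).copy()

def denoise_2d(x2d, guidance, t):
    # Stub 2D denoiser: nudge the noisy views toward the 3D-rendered guidance
    # so the 2D branch stays multi-view consistent.
    return x2d + t * (guidance - x2d)

def denoise_3d(x3d, guidance, t):
    # Stub 3D denoiser: nudge the voxel grid toward features lifted from the
    # 2D branch so the 3D branch inherits rich 2D texture priors.
    return x3d + t * (guidance - x3d)

def bidiff_step(x2d_t, x3d_t, t):
    # One joint denoising step: 3D -> 2D guidance, then 2D -> 3D guidance.
    guide_views = render_views(x3d_t)
    x2d_prev = denoise_2d(x2d_t, guide_views, t)
    guide_volume = lift_to_3d(x2d_prev)
    x3d_prev = denoise_3d(x3d_t, guide_volume, t)
    return x2d_prev, x3d_prev

# One step on random noise, just to show the data flow between branches.
rng = np.random.default_rng(0)
x2d = rng.normal(size=(3, RES, RES))
x3d = rng.normal(size=(RES, RES, RES))
x2d, x3d = bidiff_step(x2d, x3d, t=0.1)
```

The point of the sketch is the ordering inside bidiff_step: the 2D branch is conditioned on renders of the 3D state before the 3D branch is conditioned on the freshly denoised 2D views, which is one plausible reading of the "bidirectional guidance" the abstract describes.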