4Dynamic: Text-to-4D Generation with Hybrid Priors
arXiv (2024)
Abstract
Owing to the impressive generative performance of text-to-image diffusion
models, a growing number of text-to-3D works explore distilling 2D generative
priors into 3D via the score distillation sampling (SDS) loss, thereby
bypassing the problem of 3D data scarcity. Existing text-to-3D methods have
achieved promising results in realism and 3D consistency, but text-to-4D
generation still faces challenges, including a lack of realism and
insufficient dynamic motion. In this paper, we propose a novel method for
text-to-4D generation that ensures motion amplitude and authenticity through
direct supervision provided by a video prior. Specifically, we adopt a
text-to-video diffusion model to generate a reference video and divide 4D
generation into two stages: static generation and dynamic generation. Static
3D generation is guided by the input text and the first frame of the reference
video, while in the dynamic generation stage we introduce a customized SDS
loss to ensure multi-view consistency, a video-based SDS loss to improve
temporal consistency, and, most importantly, direct priors from the reference
video to ensure the quality of geometry and texture. Moreover, we design a
prior-switching training strategy to avoid conflicts between the different
priors and to fully exploit the benefits of each. In addition, to enrich the
generated motion, we introduce a dynamic modeling representation composed of a
deformation network and a topology network, which ensures temporal continuity
while modeling topological changes. Our method supports not only text-to-4D
generation but also 4D generation from monocular videos. Comparative
experiments demonstrate the superiority of our method over existing methods.
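
For context, the SDS loss referenced in the abstract is the score distillation objective introduced by DreamFusion; its standard gradient form (the generic formulation, not this paper's customized or video-based variants) is:

\[
\nabla_\theta \mathcal{L}_{\text{SDS}}\big(\phi, \mathbf{x} = g(\theta)\big)
= \mathbb{E}_{t,\boldsymbol{\epsilon}}\!\left[ w(t)\,
\big(\hat{\boldsymbol{\epsilon}}_\phi(\mathbf{x}_t;\, y, t) - \boldsymbol{\epsilon}\big)\,
\frac{\partial \mathbf{x}}{\partial \theta} \right]
\]

where \(g(\theta)\) renders an image \(\mathbf{x}\) from the 3D representation with parameters \(\theta\), \(\mathbf{x}_t\) is the rendered image noised to timestep \(t\), \(\hat{\boldsymbol{\epsilon}}_\phi\) is the diffusion model's noise prediction conditioned on the text prompt \(y\), and \(w(t)\) is a timestep-dependent weighting function.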
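The abstract does not detail the dynamic representation. As a rough illustration only: a deformation network typically maps a canonical point and time to a positional offset, while a topology network (in the HyperNeRF sense) maps them to extra ambient coordinates so the canonical field can model topology changes. The PyTorch sketch below is hypothetical; the module names, layer widths, and positional encoding are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    # Standard NeRF-style sinusoidal encoding (an assumed detail, not from the paper).
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device) * torch.pi
    angles = x[..., None] * freqs                 # (..., D, F)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)              # (..., D * 2F)

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=128, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class DynamicField(nn.Module):
    """Hypothetical dynamic representation: a deformation net predicting a
    per-point offset Δx (temporal continuity), plus a topology net predicting
    ambient coordinates w that let the canonical field model topology changes."""
    def __init__(self, num_freqs=6, ambient_dim=2):
        super().__init__()
        in_dim = (3 + 1) * 2 * num_freqs          # encoded (x, t)
        self.num_freqs = num_freqs
        self.deform_net = MLP(in_dim, 3)          # (x, t) -> Δx
        self.topo_net = MLP(in_dim, ambient_dim)  # (x, t) -> w

    def forward(self, x, t):
        # x: (N, 3) sample points; t: (N, 1) normalized time in [0, 1].
        h = positional_encoding(torch.cat([x, t], dim=-1), self.num_freqs)
        delta_x = self.deform_net(h)              # smooth offset keeps motion continuous
        w = self.topo_net(h)                      # extra dims absorb topology changes
        return x + delta_x, w                     # query canonical field at (x + Δx, w)

# Usage: warp sampled ray points into the canonical space before querying a NeRF.
field = DynamicField()
pts = torch.rand(1024, 3)
time = torch.full((1024, 1), 0.5)
x_can, w = field(pts, time)  # then: density, color = canonical_nerf(x_can, w)
```

Splitting deformation from topology reflects a common design choice: a pure deformation field is continuous by construction and cannot represent splitting or merging surfaces, which is exactly what the extra ambient coordinates are meant to capture.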