TC4D: Trajectory-Conditioned Text-to-4D Generation
arXiv (2024)
Abstract
Recent techniques for text-to-4D generation synthesize dynamic 3D scenes
using supervision from pre-trained text-to-video models. However, existing
representations for motion, such as deformation models or time-dependent neural
representations, are limited in the amount of motion they can generate: they
cannot synthesize motion extending far beyond the bounding box used for volume
rendering. The lack of a more flexible motion model contributes to the gap in
realism between 4D generation methods and recent, near-photorealistic video
generation models. Here, we propose TC4D: trajectory-conditioned text-to-4D
generation, which factors motion into global and local components. We represent
the global motion of a scene's bounding box using rigid transformation along a
trajectory parameterized by a spline. We learn local deformations that conform
to the global trajectory using supervision from a text-to-video model. Our
approach enables the synthesis of scenes animated along arbitrary trajectories,
compositional scene generation, and significant improvements to the realism and
amount of generated motion, which we evaluate qualitatively and through a user
study. Video results can be viewed on our website:
https://sherwinbahmani.github.io/tc4d.
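
To make the global-motion component concrete, the following is a minimal sketch (not the authors' implementation) of a rigid transformation applied along a spline-parameterized trajectory. The control points, the tangent-aligned orientation convention, and the unit-cube bounding box are all illustrative assumptions.

```python
# Minimal sketch: global motion as a rigid transform R(t) x + T(t)
# along a cubic-spline trajectory. All values here are assumptions,
# not parameters from the paper.
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical trajectory control points (x, y, z) at times t in [0, 1].
t_ctrl = np.linspace(0.0, 1.0, 4)
p_ctrl = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.5, 0.0],
                   [2.0, 0.5, 0.5],
                   [3.0, 0.0, 0.0]])
spline = CubicSpline(t_ctrl, p_ctrl)   # position along the trajectory
velocity = spline.derivative()         # tangent, used here for orientation

def tangent_rotation(forward, up=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix whose first column aligns with the trajectory tangent
    (one simple orientation convention; the paper does not prescribe this)."""
    f = forward / np.linalg.norm(forward)
    s = np.cross(up, f)
    s /= np.linalg.norm(s)
    u = np.cross(f, s)
    return np.stack([f, s, u], axis=1)  # columns: forward, side, up

def rigid_transform(points, t):
    """Apply the trajectory's rigid transform to an (N, 3) point array."""
    R = tangent_rotation(velocity(t))
    T = spline(t)
    return points @ R.T + T

# Example: move a canonical unit-cube bounding box along the trajectory.
corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                              for y in (-0.5, 0.5)
                              for z in (-0.5, 0.5)])
for t in (0.0, 0.5, 1.0):
    # The box center follows the spline; local deformations (learned with
    # video supervision in the paper) would be applied on top of this.
    print(t, rigid_transform(corners, t).mean(axis=0))
```

In this decomposition, only the residual, trajectory-conforming deformation must be learned from the text-to-video model; the rigid spline motion carries the scene beyond the bounding box used for volume rendering.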