Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication
CoRR (2024)
Abstract
Existing diffusion-based text-to-3D generation methods primarily focus on
producing visually realistic shapes and appearances, often neglecting the
physical constraints necessary for downstream tasks. Generated models
frequently fail to maintain balance when placed in physics-based simulations or
when 3D printed. This balance is crucial for satisfying user design intentions in
interactive gaming, embodied AI, and robotics, where stable models are needed
for reliable interaction. Additionally, stable models ensure that 3D-printed
objects, such as figurines for home decoration, can stand on their own without
requiring additional supports. To fill this gap, we introduce Atlas3D, an
automatic and easy-to-implement method that enhances existing Score
Distillation Sampling (SDS)-based text-to-3D tools. Atlas3D ensures the
generation of self-supporting 3D models that adhere to physical laws of
stability under gravity, contact, and friction. Our approach combines a novel
differentiable simulation-based loss function with physically inspired
regularization, serving as either a refinement or a post-processing module for
existing frameworks. We verify Atlas3D's efficacy through extensive generation
tasks and validate the resulting 3D models in both simulated and real-world
environments.
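To give intuition for the kind of stability criterion such a loss must encode, here is a minimal, hypothetical sketch of a "self-supporting" check: an object standing under gravity is stable when the projection of its center of mass falls inside its support region on the ground. This is an illustrative proxy only (uniform density, axis-aligned bounding box of contact points instead of the true support polygon), not the paper's differentiable simulation-based loss.

```python
import numpy as np

def stability_penalty(vertices, contact_tol=1e-3):
    """Crude self-support proxy: zero when the center-of-mass (COM)
    projection lies inside the axis-aligned bounding box of the
    ground-contact vertices, growing linearly as it moves outside.
    Assumes uniform density (COM ~ mean vertex) and a flat ground
    plane at the minimum z of the mesh."""
    v = np.asarray(vertices, dtype=float)
    com_xy = v.mean(axis=0)[:2]                    # uniform-density COM proxy
    z_min = v[:, 2].min()
    contacts = v[v[:, 2] <= z_min + contact_tol]   # vertices touching the ground
    lo = contacts[:, :2].min(axis=0)               # support-region bounding box
    hi = contacts[:, :2].max(axis=0)
    # hinge-style overshoot of the COM projection outside the box, per axis
    overshoot = np.maximum(lo - com_xy, 0.0) + np.maximum(com_xy - hi, 0.0)
    return float(overshoot.sum())
```

A unit cube yields zero penalty (its COM projects onto the center of its base), while a top-heavy shape whose mass leans far past its small footprint yields a positive penalty; a differentiable variant of such a term can then be minimized alongside the generation objective.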