Shape transformer nets: Generating viewpoint-invariant 3D shapes from a single image

Journal of Visual Communication and Image Representation (2021)

Abstract
Single-view 3D shape generation has achieved great success in recent years. However, current methods typically entangle the learning of shapes and viewpoints: the generated shape fits only the observed viewpoint and may not be optimal from unknown viewpoints. In this paper, we propose a novel encoder–decoder network that contains a disentangled transformer to generate viewpoint-invariant 3D shapes. Differentiable, parametric Non-Uniform Rational B-Spline (NURBS) surface generation and a 3D-to-3D viewpoint transformation are incorporated to learn the viewpoint-invariant shape and the camera viewpoint, respectively. Our framework learns the latent geometric parameters of shapes and viewpoints without knowing the ground-truth viewpoint, so it can simultaneously estimate the camera viewpoint and generate the viewpoint-invariant 3D shape of the object. We analyze the effects of disentanglement and present both quantitative and qualitative results for shapes generated at various unknown viewpoints.
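The abstract couples two differentiable pieces: a parametric B-spline surface generator that turns predicted control points into a canonical, viewpoint-invariant shape, and a 3D-to-3D transform that moves that shape into the camera frame. The sketch below is a minimal, hypothetical illustration of that coupling, not the authors' implementation: the names `bspline_surface` and `apply_viewpoint`, the azimuth/elevation camera model, and the control-point grid size are all assumptions. It only shows that a tensor-product B-spline surface S(u, v) = Σ_i Σ_j N_i(u) N_j(v) P_ij can be evaluated differentiably and rotated, so gradients reach both the shape and the viewpoint parameters.

```python
# Hypothetical sketch (not the paper's code): differentiable B-spline surface
# evaluation plus a 3D-to-3D viewpoint rotation, both built from torch ops.
import torch


def open_uniform_knots(n_ctrl: int, degree: int) -> torch.Tensor:
    """Clamped (open) uniform knot vector with n_ctrl + degree + 1 entries."""
    interior = torch.linspace(0.0, 1.0, n_ctrl - degree + 1)[1:-1]
    return torch.cat([torch.zeros(degree + 1), interior, torch.ones(degree + 1)])


def bspline_basis(u: torch.Tensor, knots: torch.Tensor, degree: int) -> torch.Tensor:
    """Cox-de Boor recursion: (M,) parameters -> (M, n_ctrl) basis values."""
    u = u.unsqueeze(1)
    # Degree-0 basis: indicator of the knot span containing u.
    N = ((u >= knots[:-1]) & (u < knots[1:])).float()
    for p in range(1, degree + 1):
        left_den = (knots[p:-1] - knots[:-p - 1]).clamp(min=1e-8)
        right_den = (knots[p + 1:] - knots[1:-p]).clamp(min=1e-8)
        # Wherever a true denominator is zero, the matching basis value is also
        # zero, so the clamp only prevents division by zero (0/0 := 0 convention).
        N = (u - knots[:-p - 1]) / left_den * N[:, :-1] \
            + (knots[p + 1:] - u) / right_den * N[:, 1:]
    return N


def bspline_surface(ctrl_pts: torch.Tensor, steps: int = 32, degree: int = 3) -> torch.Tensor:
    """Tensor-product B-spline surface: (n_u, n_v, 3) control points -> (steps, steps, 3) samples."""
    n_u, n_v, _ = ctrl_pts.shape
    t = torch.linspace(0.0, 1.0 - 1e-6, steps)  # stay inside the last knot span
    Bu = bspline_basis(t, open_uniform_knots(n_u, degree), degree)  # (steps, n_u)
    Bv = bspline_basis(t, open_uniform_knots(n_v, degree), degree)  # (steps, n_v)
    return torch.einsum("ui,vj,ijk->uvk", Bu, Bv, ctrl_pts)


def apply_viewpoint(points: torch.Tensor, azimuth: torch.Tensor, elevation: torch.Tensor) -> torch.Tensor:
    """Rotate canonical-frame points into the camera frame (assumed azimuth/elevation model)."""
    ca, sa = torch.cos(azimuth), torch.sin(azimuth)
    ce, se = torch.cos(elevation), torch.sin(elevation)
    zero, one = torch.zeros_like(ca), torch.ones_like(ca)
    Rz = torch.stack([torch.stack([ca, -sa, zero]),
                      torch.stack([sa, ca, zero]),
                      torch.stack([zero, zero, one])])   # rotation about the z-axis
    Rx = torch.stack([torch.stack([one, zero, zero]),
                      torch.stack([zero, ce, -se]),
                      torch.stack([zero, se, ce])])      # rotation about the x-axis
    return points @ (Rx @ Rz).T


if __name__ == "__main__":
    # A decoder would predict the control-point grid; a random grid stands in here.
    ctrl = torch.randn(6, 6, 3, requires_grad=True)
    canonical = bspline_surface(ctrl)                    # viewpoint-invariant shape
    posed = apply_viewpoint(canonical, torch.tensor(0.5), torch.tensor(0.2))
    posed.sum().backward()                               # gradients reach the control points
    print(canonical.shape, posed.shape, ctrl.grad.shape)
```

Because both the surface evaluation and the rotation are plain tensor operations, a reconstruction loss on the posed surface can supervise the canonical shape and the viewpoint jointly, which is the disentanglement the abstract describes; the paper's actual network and losses are not reproduced here.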
Keywords
3D shape generation, Invariant viewpoint, Disentanglement, B-spline surfaces