Physically-guided Disentangled Implicit Rendering for 3D Face Modeling

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
This paper presents a novel Physically-guided Disentangled Implicit Rendering (PhyDIR) framework for high-fidelity 3D face modeling. The motivation comes from two observations: widely-used graphics renderers make excessive approximations that fall short of photo-realistic imaging, while neural rendering methods produce superior appearances but are too entangled to support 3D-aware operations. Hence, we learn to disentangle implicit rendering via explicit physical guidance, while guaranteeing two properties: (1) 3D-aware comprehension and (2) high-reality image formation. For the former, PhyDIR explicitly adopts 3D shading and rasterizing modules to control the renderer, which disentangles lighting, facial shape, and viewpoint from neural reasoning. Specifically, PhyDIR proposes a novel multi-image shading strategy to compensate for the monocular limitation, so that lighting variations become accessible to the neural renderer. For the latter, PhyDIR learns a face-collection implicit texture to avoid ill-posed intrinsic factorization, then leverages a series of consistency losses to constrain rendering robustness. With this disentangled method, 3D face modeling benefits from both kinds of rendering strategies. Extensive experiments on benchmarks show that PhyDIR outperforms state-of-the-art explicit and implicit methods on both geometry and texture modeling.
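The core idea of the abstract, composing a learned implicit texture with an explicit, physically-derived shading term so that lighting stays disentangled from appearance, can be illustrated with a minimal Lambertian-shading sketch. This is a hypothetical toy example, not the paper's implementation: the function names, the ambient term, and the per-pixel composition are all illustrative assumptions.

```python
# Hypothetical sketch (not PhyDIR's actual code): explicit physical shading
# modulating an implicit (learned) texture, so lighting is controllable
# independently of appearance.

def lambert_shading(normal, light_dir, ambient=0.2):
    """Scalar shading from a unit surface normal and unit light direction.

    ambient is an assumed floor term so unlit regions are not pure black.
    """
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return ambient + (1.0 - ambient) * max(dot, 0.0)

def render_pixel(texture_rgb, normal, light_dir):
    """Compose a final color as implicit texture times explicit shading.

    texture_rgb stands in for the output of a learned texture decoder;
    shading comes from explicit geometry and lighting, keeping the two
    factors disentangled.
    """
    s = lambert_shading(normal, light_dir)
    return tuple(c * s for c in texture_rgb)
```

Because the shading factor is computed from explicit geometry and lighting rather than absorbed into the network, one can edit the light direction at render time without retraining the texture, which is the kind of 3D-aware operation the abstract describes.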
Keywords
Face and gestures, 3D from single images