
FaceRefiner: High-Fidelity Facial Texture Refinement With Differentiable Rendering-Based Style Transfer

Chengyang Li, Baoping Cheng, Yao Cheng, Haocheng Zhang, Renshuai Liu, Yinglin Zheng, Jing Liao, Xuan Cheng

IEEE Transactions on Multimedia (2024)

Abstract
Recent facial texture generation methods use deep networks to synthesize image content and then fill in the UV map, thus generating a compelling full texture from a single image. Nevertheless, the synthesized texture UV map usually comes from a space constructed by the training data or the 2D face generator, which limits these methods' generalization ability for in-the-wild input images. Consequently, the generated facial details, structures and identity may not be consistent with the input. In this paper, we address this issue by proposing a style transfer-based facial texture refinement method named FaceRefiner. FaceRefiner treats the 3D-sampled texture as style and the output of a texture generation method as content. The photo-realistic style is then expected to be transferred from the style image to the content image. Unlike current style transfer methods, which transfer only high- and middle-level information to the result, our style transfer method integrates differentiable rendering to also transfer low-level (pixel-level) information in the visible face regions. The main benefit of such multi-level information transfer is that the details, structures and semantics of the input are well preserved. Extensive experiments on the Multi-PIE, CelebA and FFHQ datasets demonstrate that our refinement method improves texture quality and identity preservation compared with state-of-the-art methods.
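The multi-level objective described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the toy `feat_fn` stand-in for a pretrained feature extractor (e.g. VGG), and the loss weights are all illustrative assumptions. The key idea it shows is combining a pixel-level term restricted to the visible (3D-sampled) texels with a Gram-statistics style term on features.

```python
import numpy as np

def masked_pixel_loss(content, style, mask):
    """Low-level (pixel) transfer term: only visible texels, selected by
    the binary mask, contribute to the squared error."""
    diff = (content - style) * mask
    return float((diff ** 2).sum() / max(mask.sum(), 1.0))

def gram(features):
    """Gram matrix over flattened spatial dimensions: the second-order
    feature statistics commonly used as a style representation."""
    c = features.reshape(features.shape[0], -1)
    return c @ c.T / c.shape[1]

def refinement_loss(content, style, mask, feat_fn, w_pix=1.0, w_style=1.0):
    """Toy multi-level objective: a pixel term on visible regions plus a
    Gram-based style term. `feat_fn` is a hypothetical stand-in for a
    pretrained feature extractor; weights are illustrative only."""
    pix = masked_pixel_loss(content, style, mask)
    sty = float(((gram(feat_fn(content)) - gram(feat_fn(style))) ** 2).mean())
    return w_pix * pix + w_style * sty
```

In an actual pipeline the style texture would be produced by differentiably rendering the reconstructed 3D face and sampling the input photo, so the pixel term is only ever evaluated where the input actually constrains the texture; the style term then propagates appearance statistics into the occluded regions.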
Key words
facial texture generation, 3D face reconstruction, style transfer