Monocular Identity-Conditioned Facial Reflectance Reconstruction
CVPR 2024
Abstract
Recent 3D face reconstruction methods have made remarkable advancements, yet
monocular high-quality facial reflectance reconstruction remains highly
challenging. Existing methods rely on a large amount of light-stage captured
data to learn facial reflectance models. However, the lack of subject diversity
poses challenges in achieving good generalization and widespread applicability.
In this paper, we learn the reflectance prior in image space rather than UV
space and present a framework named ID2Reflectance. Our framework directly
estimates reflectance maps from a single image while requiring only limited
reflectance data for training. Our key insight is that reflectance data shares
facial structures with RGB faces, which enables an expressive facial prior to
be learned from inexpensive RGB data, thus reducing the dependency on
reflectance data. We first
learn a high-quality prior for facial reflectance. Specifically, we pretrain
multi-domain facial feature codebooks and design a codebook fusion method to
align the reflectance and RGB domains. Then, we propose an identity-conditioned
swapping module that injects facial identity from the target image into the
pre-trained autoencoder to modify the identity of the source reflectance image.
Finally, we stitch multi-view swapped reflectance images to obtain renderable
assets. Extensive experiments demonstrate that our method exhibits excellent
generalization capability and achieves state-of-the-art facial reflectance
reconstruction results for in-the-wild faces. Our project page is
https://xingyuren.github.io/id2reflectance/.
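The multi-domain facial feature codebooks described above follow the vector-quantized autoencoder family, where continuous encoder features are replaced by their nearest entries in a learned codebook. A minimal, hypothetical sketch of that nearest-neighbour lookup (illustrative only; not the authors' implementation, and the function name `quantize` is our own) might look like:

```python
def quantize(features, codebook):
    """VQ-style codebook lookup: map each continuous feature vector
    to the index and value of its nearest codebook entry.

    features: list of D-dimensional vectors (encoder outputs)
    codebook: list of K learned D-dimensional code vectors
    returns: list of (index, code_vector) pairs
    """
    quantized = []
    for f in features:
        # Squared Euclidean distance from f to every codebook entry.
        dists = [sum((a - b) ** 2 for a, b in zip(f, c)) for c in codebook]
        k = dists.index(min(dists))
        quantized.append((k, codebook[k]))
    return quantized

# Toy example: 2-D features against a 3-entry codebook.
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
feats = [[0.1, -0.1], [1.9, 2.2]]
result = quantize(feats, codebook)
# result → [(0, [0.0, 0.0]), (2, [2.0, 2.0])]
```

In the paper's setting, separate codebooks of this kind would be pretrained per domain (RGB, diffuse, specular, etc.) and then fused so that reflectance decoding can draw on the structural prior learned from abundant RGB faces.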