A Local Appearance Model for Volumetric Capture of Diverse Hairstyles
CoRR (2023)
Abstract
Hair plays a significant role in personal identity and appearance, making it
an essential component of high-quality, photorealistic avatars. Existing
approaches either focus on modeling the facial region only or rely on
personalized models, limiting their generalizability and scalability. In this
paper, we present a novel method for creating high-fidelity avatars with
diverse hairstyles. Our method leverages the local similarity across different
hairstyles and learns a universal hair appearance prior from multi-view
captures of hundreds of people. This prior model takes 3D-aligned features as
input and generates dense radiance fields conditioned on a sparse point cloud
with color. As our model splits different hairstyles into local primitives and
builds a prior at that level, it can handle a variety of hair topologies.
Through experiments, we demonstrate that our model captures a diverse range of
hairstyles and generalizes well to challenging new hairstyles. Empirical
results show that our method improves on state-of-the-art approaches for
capturing and generating photorealistic, personalized avatars with complete
hair.
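The abstract describes a prior that, for each local primitive, maps features derived from a sparse colored point cloud to a dense radiance field (density and color per query point). A minimal sketch of that conditioning idea is shown below; the function names, the k-nearest-neighbor feature gathering, and the tiny MLP head are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def knn_point_features(query, cloud_xyz, cloud_rgb, k=8):
    """Gather a conditioning vector for one 3D query point from its k
    nearest colored cloud points (hypothetical stand-in for the paper's
    3D-aligned feature encoding)."""
    dists = np.linalg.norm(cloud_xyz - query, axis=1)
    idx = np.argsort(dists)[:k]
    offsets = cloud_xyz[idx] - query            # relative positions (k, 3)
    feats = np.concatenate([offsets, cloud_rgb[idx]], axis=1)  # (k, 6)
    return feats.reshape(-1)                    # flat (k * 6,) vector

def tiny_radiance_head(feats, w1, w2):
    """Toy MLP mapping the conditioning vector to (density, rgb)."""
    h = np.maximum(w1 @ feats, 0.0)             # ReLU hidden layer
    out = w2 @ h                                # 4 raw outputs
    sigma = np.log1p(np.exp(out[0]))            # softplus: density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[1:4]))       # sigmoid: color in [0, 1]
    return sigma, rgb
```

Because the conditioning is purely local (relative offsets plus nearby point colors), the same learned head can in principle be reused across hairstyles with very different global topology, which is the intuition behind the paper's primitive-level prior.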
Keywords
Point Cloud, Sparse Point, Sparse Point Cloud, Training Set, Training Data, Fine-tuned, 3D Space, Additional Input, Skip Connections, Separate Layers, Training Objective, Universal Model, Ground Truth Image, Semantic Labels, Sparse Structure, RGB Values, Representation Of Composition, Color Points, Multi-view Images, View Synthesis, Accurate Capture, Camera Center, Hair Strands