HAHA: Highly Articulated Gaussian Human Avatars with Textured Mesh Prior
arXiv (2024)
Abstract
We present HAHA, a novel approach for generating animatable human avatars
from monocular input videos. The proposed method learns a trade-off between
Gaussian splatting and a textured mesh to achieve efficient, high-fidelity
rendering. We demonstrate its efficiency in animating and rendering full-body
human avatars controlled via the SMPL-X parametric model. Our model learns to
apply Gaussian splatting only in areas of the SMPL-X mesh where it is
necessary, such as hair and out-of-mesh clothing. As a result, a minimal
number of Gaussians is used to represent the full avatar, and rendering
artifacts are reduced. This also allows us to handle the animation of small
body parts, such as fingers, that are traditionally disregarded. We
demonstrate the effectiveness of our approach on two open datasets:
SnapshotPeople and X-Humans. Our method achieves reconstruction quality on
par with the state-of-the-art on SnapshotPeople while using less than a third
of the Gaussians. HAHA outperforms the previous state-of-the-art on novel
poses from X-Humans, both quantitatively and qualitatively.