Holoported Characters: Real-time Free-viewpoint Rendering of Humans from Sparse RGB Cameras
arXiv (2023)
Abstract
We present the first approach to render highly realistic free-viewpoint
videos of a human actor in general apparel, from sparse multi-view recording to
display, in real-time at an unprecedented 4K resolution. At inference, our
method only requires four camera views of the moving actor and the respective
3D skeletal pose. It handles actors in wide clothing, and reproduces even
fine-scale dynamic detail, e.g. clothing wrinkles, facial expressions, and hand
gestures. At training time, our learning-based approach expects dense
multi-view video and a rigged static surface scan of the actor. Our method
comprises three main stages. Stage 1 is a skeleton-driven neural approach for
high-quality capture of the detailed dynamic mesh geometry. Stage 2 is a novel
solution to create a view-dependent texture using four test-time camera views
as input. Finally, stage 3 comprises a new image-based refinement network
rendering the final 4K image given the output from the previous stages. Our
approach establishes a new benchmark for real-time rendering resolution and
quality using sparse input camera views, unlocking possibilities for immersive
telepresence.
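The three-stage pipeline described above can be sketched in code. Everything below is an illustrative assumption — the function names, tensor shapes, joint count, and the stand-in operations are hypothetical stubs, not the authors' actual networks or API; they only mirror the data flow: pose → dynamic mesh (stage 1), four views → view-dependent texture (stage 2), texture → refined 4K image (stage 3).

```python
import numpy as np

# Hypothetical sketch of the abstract's three-stage pipeline.
# All names and shapes are illustrative, not the paper's implementation.

def stage1_pose_to_mesh(skeletal_pose: np.ndarray) -> np.ndarray:
    """Stage 1 (stub): skeleton-driven capture of dynamic mesh geometry.
    A real network would regress per-vertex deformations from the pose."""
    num_vertices = 1000  # assumed mesh size
    return np.zeros((num_vertices, 3)) + skeletal_pose.mean()

def stage2_view_dependent_texture(mesh: np.ndarray,
                                  camera_views: list) -> np.ndarray:
    """Stage 2 (stub): fuse the four test-time camera views into a
    view-dependent texture; here a plain average stands in for the
    learned fusion."""
    assert len(camera_views) == 4, "the method expects four input views"
    return np.mean(np.stack(camera_views), axis=0)

def stage3_refine(texture: np.ndarray,
                  target_resolution=(3840, 2160)) -> np.ndarray:
    """Stage 3 (stub): image-based refinement network producing the
    final 4K frame; nearest-neighbour resize stands in for the network."""
    w, h = target_resolution
    ys = np.linspace(0, texture.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, texture.shape[1] - 1, w).astype(int)
    return texture[np.ix_(ys, xs)]

def render_frame(skeletal_pose: np.ndarray, camera_views: list) -> np.ndarray:
    mesh = stage1_pose_to_mesh(skeletal_pose)
    texture = stage2_view_dependent_texture(mesh, camera_views)
    return stage3_refine(texture)

# Usage with dummy inputs: a 24-joint pose (assumed) and four 960x540 views.
pose = np.random.rand(24, 3)
views = [np.random.rand(540, 960, 3) for _ in range(4)]
frame = render_frame(pose, views)
print(frame.shape)  # (2160, 3840, 3)
```

The point of the sketch is the interface between stages: at inference only the pose and four RGB views are needed, and all heavy learning (dense multi-view training, the rigged scan) happens offline before this loop runs.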