ProbIBR: Fast Image-Based Rendering with Learned Probability-Guided Sampling.

IEEE Transactions on Visualization and Computer Graphics (2024)

Abstract
We present a general, fast, and practical solution for interpolating novel views of diverse real-world scenes given a sparse set of nearby views. Existing generic novel view synthesis methods rely on time-consuming scene geometry pre-computation or redundant sampling of the entire space for neural volumetric rendering, which limits overall efficiency. Instead, we incorporate learned multi-view stereo (MVS) priors into the neural volume rendering pipeline and improve rendering efficiency by reducing the number of sampling points: fewer but more important points are sampled under the guidance of depth probability distributions extracted from the learned MVS architecture. On top of this probability-guided sampling, we develop a neural volume rendering module that effectively integrates source view information with the learned scene structure. We further propose confidence-aware refinement to improve rendering in uncertain, occluded, and unreferenced regions. Moreover, we build a four-view camera system for holographic display and provide a real-time version of our framework for free-viewpoint experience, where novel views at a spatial resolution of 512×512 can be rendered at around 20 fps on a single RTX 3090 GPU. Experiments show that our method renders 15 to 40 times faster than state-of-the-art baselines while offering strong generalization capacity and comparable high-quality novel view synthesis.
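To make the sampling idea concrete, the following is a minimal sketch (not the authors' code) of probability-guided depth sampling: given a per-ray depth probability distribution over coarse depth bins, as might be produced by a learned MVS cost volume, a small number of sample depths is drawn by inverse-CDF sampling instead of sampling the whole ray uniformly. The function name `sample_depths_from_probability` and the toy distribution are hypothetical.

```python
# Minimal sketch of probability-guided sampling along a ray (assumed, not the paper's code).
import numpy as np

def sample_depths_from_probability(bin_centers, probs, n_samples, rng=None):
    """Inverse-CDF sampling of depths along each ray.

    bin_centers: (n_rays, n_bins) depth of each coarse bin along the ray
    probs:       (n_rays, n_bins) per-bin depth probabilities
    n_samples:   number of points to place on each ray (small, e.g. 8-16)
    Returns:     (n_rays, n_samples) sampled depths, sorted per ray
    """
    rng = np.random.default_rng() if rng is None else rng
    n_rays, n_bins = probs.shape

    # Normalize and build the per-ray CDF over depth bins.
    pdf = probs / np.clip(probs.sum(axis=-1, keepdims=True), 1e-8, None)
    cdf = np.cumsum(pdf, axis=-1)

    # Stratified uniform samples in [0, 1), inverted through the CDF so that
    # more samples land in bins with higher depth probability.
    u = (np.arange(n_samples) + rng.random((n_rays, n_samples))) / n_samples
    idx = np.minimum(
        np.stack([np.searchsorted(cdf[r], u[r]) for r in range(n_rays)]),
        n_bins - 1,
    )

    # Jitter within the selected bin so samples do not all sit at bin centers.
    bin_width = np.gradient(bin_centers, axis=-1)
    depths = np.take_along_axis(bin_centers, idx, axis=-1)
    depths += (rng.random(depths.shape) - 0.5) * np.take_along_axis(bin_width, idx, axis=-1)
    return np.sort(depths, axis=-1)

# Toy usage: 2 rays, 32 coarse depth bins, 8 samples per ray.
bins = np.tile(np.linspace(2.0, 6.0, 32), (2, 1))
p = np.exp(-0.5 * ((bins - 4.0) / 0.3) ** 2)  # stand-in for an MVS depth probability
print(sample_depths_from_probability(bins, p, n_samples=8))
```

Because the sampled points concentrate around the depths the MVS prior considers likely, far fewer network evaluations per ray are needed than with dense uniform sampling, which is the source of the speed-up the abstract describes.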
Keywords
View synthesis, image-based rendering, volume rendering