Prior-Driven NeRF: Prior Guided Rendering

Electronics (2023)

Abstract
Neural radiance field (NeRF)-based novel view synthesis methods are gaining popularity, as NeRF can generate more detailed and realistic images than traditional methods. However, a conventional NeRF reconstruction of a room scene requires at least several hundred input images and generates a large number of spatial sampling points, placing a tremendous memory and computation burden on both training and prediction. To address these problems, we propose a prior-driven NeRF model that accepts only sparse views as input and discards a large number of non-contributing sampling points, improving training and prediction efficiency and achieving fast, high-quality rendering. First, this study uses depth priors to guide sampling: only a few sampling points within a controllable range around the depth prior are used as input, which reduces memory occupation and improves the efficiency of training and prediction. Second, this study encodes the depth priors as distance weights in the model, guiding it to quickly fit the object surface. Finally, a novel approach combining the traditional mesh rendering method (TMRM) with NeRF volume rendering is used to further improve rendering efficiency. Experimental results demonstrate that our method has significant advantages in the case of sparse input views (11 per room) and few sampling points (8 per ray).
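The two depth-prior ideas in the abstract — sampling only within a narrow band around the prior depth, and weighting samples by their distance to that depth — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the `margin` band half-width, and the Gaussian form of the distance weight are all assumptions for clarity.

```python
import numpy as np

def depth_guided_samples(ray_o, ray_d, depth_prior, n_samples=8, margin=0.1):
    """Sample n_samples points on a ray, restricted to a narrow band
    [depth_prior - margin, depth_prior + margin] around the depth prior,
    instead of the full near/far range of vanilla NeRF."""
    t = np.linspace(depth_prior - margin, depth_prior + margin, n_samples)
    pts = ray_o[None, :] + t[:, None] * ray_d[None, :]  # (n_samples, 3)
    return t, pts

def distance_weights(t, depth_prior, sigma=0.05):
    """Gaussian weight that peaks at the prior depth, encouraging the
    model to concentrate density near the expected object surface
    (one plausible way to encode depth priors as distance weights)."""
    return np.exp(-0.5 * ((t - depth_prior) / sigma) ** 2)

# Example: a ray from the origin along +z, with a depth prior of 2.0 m.
t, pts = depth_guided_samples(np.zeros(3), np.array([0.0, 0.0, 1.0]), 2.0)
w = distance_weights(t, 2.0)
```

With 8 samples confined to a band of width 0.2 around the prior, the per-ray cost is a small fraction of the 64–192 stratified samples typical of vanilla NeRF, which is the source of the training and prediction speedup the abstract claims.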
Keywords
scene representation, neural radiance field, novel view synthesis, depth priors, rendering accelerations, hybrid rendering