Reconstruction of Neural Radiance Fields With Vivid Scenes in the Metaverse

Weipeng Jing, Shijie Wang, Wenjun Zhang, Chao Li

IEEE Trans. Consumer Electron. (2024)

Abstract
With the rapid development of the metaverse, AR/VR technology and consumer electronics have emerged as crucial drivers of future virtual experiences. In this context, realistic 3D scene modeling and rendering have become particularly critical. Neural Radiance Fields (NeRF), a deep learning-based method, has made significant progress in the reconstruction and rendering of real-world 3D scenes. However, NeRF still struggles to handle high-frequency information within objects efficiently, leading to issues such as blurred details and artifacts. To address this issue and provide more immersive virtual experiences, we propose Vivid-NeRF, which benefits both the metaverse and AR/VR technologies. Vivid-NeRF extracts information at different frequencies from the image and fully exploits the high-frequency information in the 3D feature generation process to recover more realistic details. In addition, we propose frequency-based sampling to increase the sampling of high-frequency components. Finally, we merge frequency information with viewpoint features obtained through frequency-based sampling to enhance the model's capability to express scene details. With these improvements, Vivid-NeRF significantly reduces surface blur and accurately captures and reproduces smooth surface appearance. We conduct experiments on the Blender and Shiny-Blender datasets. Vivid-NeRF achieves PSNR scores of 35.00 and 34.01, SSIM scores of 0.976 and 0.970, and LPIPS scores of 0.033 and 0.064 on the two datasets, respectively. Both quantitative and qualitative assessments demonstrate that our approach outperforms the previous state-of-the-art (SOTA) method, ABLE-NeRF.
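The keywords list a wavelet transform, which hints at how the frequency bands mentioned in the abstract could be separated. Below is a minimal sketch, assuming a single-level 2D discrete wavelet transform (via PyWavelets) splits an image into low- and high-frequency bands, and approximating "frequency-based sampling" as biasing ray selection toward high-frequency pixels. The function names (`decompose_frequencies`, `frequency_sampling_weights`) and this pixel-level interpretation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
import pywt

def decompose_frequencies(image: np.ndarray, wavelet: str = "haar"):
    """Split a grayscale image into low- and high-frequency bands
    with a single-level 2D discrete wavelet transform."""
    low, (horiz, vert, diag) = pywt.dwt2(image, wavelet)
    # Aggregate the three detail sub-bands into one high-frequency map.
    high = np.abs(horiz) + np.abs(vert) + np.abs(diag)
    return low, high

def frequency_sampling_weights(high_freq: np.ndarray, eps: float = 1e-6):
    """Turn the high-frequency magnitude map into a probability
    distribution so detailed regions are sampled more often."""
    weights = high_freq + eps  # eps keeps flat regions reachable
    return weights / weights.sum()

# Usage: draw training-ray indices biased toward high-frequency pixels.
rng = np.random.default_rng(0)
img = rng.random((64, 64)).astype(np.float32)   # stand-in for a real image
low, high = decompose_frequencies(img)
p = frequency_sampling_weights(high)
picks = rng.choice(p.size, size=1024, p=p.ravel())
```

In this reading, the low band would feed the coarse scene representation while the high band both guides where samples are concentrated and contributes detail features; how the paper actually fuses these with viewpoint features is not specified in the abstract.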
Keywords
Metaverse, AR, VR, NeRF, Scene Reconstruction, Wavelet Transform