MSI-NeRF: Linking Omni-Depth with View Synthesis through Multi-Sphere Image aided Generalizable Neural Radiance Field
CoRR (2024)
Abstract
Panoramic observation using fisheye cameras is significant in robot
perception, reconstruction, and remote operation. However, panoramic images
synthesized by traditional methods lack depth information and can only provide
three degrees-of-freedom (3DoF) rotation rendering in virtual reality
applications. To fully preserve and exploit the parallax information within the
original fisheye cameras, we introduce MSI-NeRF, which combines deep learning
omnidirectional depth estimation and novel view rendering. We first construct a
multi-sphere image as a cost volume through feature extraction and warping of
the input images. It is then processed by geometry and appearance decoders,
respectively. Unlike methods that regress depth maps directly, we further build
an implicit radiance field using spatial points and interpolated 3D feature
vectors as input. In this way, we can simultaneously realize omnidirectional
depth estimation and 6DoF view synthesis. Our method is trained in a
semi-self-supervised manner. It does not require target view images and only
uses depth data for supervision. Our network has the generalization ability to
reconstruct unknown scenes efficiently using only four images. Experimental
results show that our method outperforms existing methods in depth estimation
and novel view synthesis tasks.
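The abstract describes querying the radiance field with spatial points and 3D feature vectors interpolated from the multi-sphere image. A minimal sketch of that interpolation step is shown below; the function names, the (D, H, W, C) equirectangular layout of the MSI volume, and the nearest-neighbour angular lookup are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cartesian_to_spherical(p):
    """Convert a 3D point to (radius, polar angle theta, azimuth phi)."""
    x, y, z = p
    r = np.linalg.norm(p)
    theta = np.arccos(np.clip(z / max(r, 1e-9), -1.0, 1.0))  # [0, pi]
    phi = np.arctan2(y, x)                                   # [-pi, pi]
    return r, theta, phi

def interpolate_msi_feature(msi, radii, point):
    """Interpolate a feature vector for `point` from an MSI feature volume.

    msi:   (D, H, W, C) features stored on D concentric spheres,
           each an H x W equirectangular grid (assumed layout)
    radii: (D,) ascending sphere radii
    """
    D, H, W, C = msi.shape
    r, theta, phi = cartesian_to_spherical(point)
    # Continuous indices into the equirectangular grid.
    v = theta / np.pi * (H - 1)
    u = (phi + np.pi) / (2 * np.pi) * (W - 1)
    # Nearest-neighbour in angle for brevity; bilinear in practice.
    vi, ui = int(round(v)), int(round(u))
    # Linear interpolation between the two bracketing spheres in radius.
    d = int(np.clip(np.searchsorted(radii, r), 1, D - 1))
    r0, r1 = radii[d - 1], radii[d]
    w = float(np.clip((r - r0) / (r1 - r0), 0.0, 1.0))
    return (1 - w) * msi[d - 1, vi, ui] + w * msi[d, vi, ui]
```

The interpolated vector, concatenated with the point's position, would then be fed to an MLP decoder to predict density and color for volume rendering, in the usual NeRF fashion.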