Mitigating Motion Blur in Neural Radiance Fields with Events and Frames
CVPR 2024
Abstract
Neural Radiance Fields (NeRFs) have shown great potential in novel view
synthesis. However, they struggle to render sharp images when the data used for
training is affected by motion blur. On the other hand, event cameras excel in
dynamic scenes as they measure brightness changes with microsecond resolution
and are thus only marginally affected by blur. Recent methods attempt to
enhance NeRF reconstructions under camera motion by fusing frames and events.
However, they face challenges in recovering accurate color content or constrain
the NeRF to a set of predefined camera poses, harming reconstruction quality in
challenging conditions. This paper proposes a novel formulation addressing
these issues by leveraging both model- and learning-based modules. We
explicitly model the blur formation process, exploiting the event double
integral as an additional model-based prior. Additionally, we model the
event-pixel response using an end-to-end learnable response function, allowing
our method to adapt to non-idealities in the real event-camera sensor. We show,
on synthetic and real data, that the proposed approach outperforms existing
deblur NeRFs that use only frames as well as those that combine frames and
events by +6.13 dB and +2.48 dB, respectively.
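
To make the model-based prior concrete, below is a minimal single-pixel sketch of the event double integral (EDI) relation the abstract refers to. It assumes the idealized event-generation model with a fixed contrast threshold c (the paper instead learns an end-to-end event-pixel response function to absorb sensor non-idealities): the blurry measurement B averages the latent intensity over the exposure, B = L(t_ref) · (1/T) ∫ exp(c · E(t)) dt, where E(t) is the signed sum of event polarities between t_ref and t. The function and variable names here are illustrative, not the authors' implementation.

```python
import numpy as np

def edi_deblur_pixel(blurry_value, t_events, p_events, t_ref, t0, t1,
                     c=0.25, n_steps=256):
    """Sharp latent intensity at t_ref for one pixel via the EDI relation.

    Idealized event model:  log L(t) = log L(t_ref) + c * E(t),
    with E(t) the signed polarity sum of events fired between t_ref and t.
    The blurry value averages L(t) over the exposure [t0, t1]:
        B = L(t_ref) * mean_t[ exp(c * E(t)) ],
    so  L(t_ref) = B / mean_t[ exp(c * E(t)) ].
    """
    ts = np.linspace(t0, t1, n_steps)
    E = np.empty(n_steps)
    for i, t in enumerate(ts):
        if t >= t_ref:
            # Events between t_ref and t push intensity forward in time.
            mask = (t_events > t_ref) & (t_events <= t)
            E[i] = p_events[mask].sum()
        else:
            # Going backward from t_ref, the same events are undone.
            mask = (t_events > t) & (t_events <= t_ref)
            E[i] = -p_events[mask].sum()
    # Double integral: exposure-average of exp(c * E(t)).
    denom = np.exp(c * E).mean()
    return blurry_value / denom

# Toy usage: one ON event at t = 0.5 inside the exposure [0, 1];
# the sharp value at t_ref = 0 is the blurry value divided by the EDI term.
t_ev, p_ev = np.array([0.5]), np.array([1.0])
sharp = edi_deblur_pixel(1.0, t_ev, p_ev, t_ref=0.0, t0=0.0, t1=1.0)
```

In the paper this analytic relation serves as an additional reconstruction prior inside the NeRF training loss, while the fixed threshold c used above is replaced by a learnable response function fitted jointly with the radiance field.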