Fast High Dynamic Range Radiance Fields for Dynamic Scenes

CoRR (2024)

Abstract
Neural Radiance Fields (NeRF) and their extensions have shown great success in representing 3D scenes and synthesizing novel-view images. However, most NeRF methods take low-dynamic-range (LDR) images as input, which may lose detail, especially under non-uniform illumination. Some previous NeRF methods attempt to introduce high-dynamic-range (HDR) techniques but mainly target static scenes. To extend HDR NeRF methods to wider applications, we propose a dynamic HDR NeRF framework, named HDR-HexPlane, which can learn 3D scenes from 2D images of dynamic scenes captured with various exposures. A learnable exposure mapping function is constructed to obtain adaptive exposure values for each image. Based on a monotonically increasing prior, a camera response function is designed for stable learning. With the proposed model, high-quality novel-view images at any time point can be rendered with any desired exposure. We further construct a dataset containing multiple dynamic scenes captured with diverse exposures for evaluation. All the datasets and code are available at .
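To make the two components described in the abstract more concrete, here is a minimal sketch (not the authors' released code) of a learnable per-image exposure mapping and a camera response function constrained to be monotonically increasing. The class names, the number of control points, and the log-radiance range are illustrative assumptions; the cumulative sum of softplus-activated increments is one common way to guarantee monotonicity by construction, without explicit constraints during optimization.

```python
# Hypothetical sketch of a learnable exposure mapping and a monotonic camera
# response function (CRF); parameterization details are assumptions, not the
# paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableExposure(nn.Module):
    """One learnable log-exposure value per training image."""

    def __init__(self, num_images: int):
        super().__init__()
        self.log_exposure = nn.Parameter(torch.zeros(num_images))

    def forward(self, image_idx: torch.Tensor) -> torch.Tensor:
        # Return the log exposure for the requested image indices.
        return self.log_exposure[image_idx]


class MonotonicCRF(nn.Module):
    """Piecewise-linear CRF whose control values are a cumulative sum of
    strictly positive increments, so the mapping is monotonically increasing."""

    def __init__(self, num_bins: int = 256, log_min: float = -5.0, log_max: float = 5.0):
        super().__init__()
        self.num_bins = num_bins
        self.log_min, self.log_max = log_min, log_max
        # Raw increments; softplus keeps each increment strictly positive.
        self.raw_delta = nn.Parameter(torch.zeros(num_bins - 1))

    def forward(self, log_radiance: torch.Tensor, log_exposure: torch.Tensor) -> torch.Tensor:
        # Exposure is applied additively in the log domain.
        x = log_radiance + log_exposure
        # Build a monotonically increasing lookup table normalized to [0, 1].
        deltas = F.softplus(self.raw_delta)
        table = torch.cat([deltas.new_zeros(1), torch.cumsum(deltas, dim=0)])
        table = table / table[-1]
        # Map inputs to fractional bin positions and linearly interpolate.
        t = (x - self.log_min) / (self.log_max - self.log_min) * (self.num_bins - 1)
        t = t.clamp(0.0, self.num_bins - 1 - 1e-6)
        lo = t.floor().long()
        frac = t - lo.float()
        return (1.0 - frac) * table[lo] + frac * table[lo + 1]
```

In such a setup, the rendered HDR radiance from the dynamic scene representation would be combined with the per-image exposure and passed through the CRF to produce an LDR pixel value that can be compared against the captured image; at test time, varying the exposure input renders the same viewpoint at any desired exposure.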
Key words
Dynamic Range, Dynamic Scenes, Radiance Field, Dynamic Imaging, Image Capture, Exposure Values, Static Scenes, Non-uniform Illumination, Input Image, Total Loss, Point Cloud, Airplane, Log Values, Explicit Model, Volume Density, Spatial Density, Ground Truth Image, Camera Pose, Scene Representation, Voxel Grid, High Dynamic Range Image, View Synthesis, Camera Pose Estimation, Consistency Assumption, Half Of The Image