Lit me up: A reference free adaptive low light image enhancement for in-the-wild conditions

Pattern Recognition (2024)

Abstract
Images captured with different devices under uneven conditions (e.g., variable lighting, low lighting, weather changes, exposure time, etc.) often suffer from low visibility and poor color and contrast, degrading the performance of computer vision and pattern recognition applications. Pre-trained convolutional neural networks (CNNs) rely solely on their training data and lack adaptation to uncertain lighting conditions. Moreover, capturing large-scale datasets to train CNNs raises computational complexity and overall cost. This work integrates knowledge and data and proposes a two-stage Uneven-to-Enliven network (U2E-Net) that rapidly learns to see in uneven conditions. A multi-layered Uneven-Net learns to separate reflectance and illumination in the input images, and an encoder–decoder-based Enliven-Net contextualizes the illumination information. A key component in such ill-posed problems is obtaining information from priors and paired data; instead, we present the compelling idea of an information trade-off followed by decomposition consistency, progressively improving visual quality with subsequent enhancement operations. To this end, we propose a two-faceted framework that works independently of the data type. A novel color and contrast preservation strategy (CPS) is proposed following the decomposition of the input data; CPS is integrated within the network to extract contrast in the darkest background regions.
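The decompose-then-enhance idea behind U2E-Net can be illustrated with a classical Retinex-style pipeline: split an image into reflectance and illumination, then brighten only the illumination and recombine. The sketch below is a hypothetical, non-learned stand-in (channel-maximum illumination estimate and a gamma curve); in the paper both stages are learned CNNs (Uneven-Net and Enliven-Net), and the `gamma` and `eps` parameters here are illustrative assumptions.

```python
import numpy as np

def retinex_style_enhance(img, gamma=0.4, eps=1e-6):
    """Retinex-style two-stage enhancement sketch (not the U2E-Net itself).

    img: float array in [0, 1] with shape (H, W, 3).
    """
    # Stage 1 (decomposition): crude illumination estimate is the
    # per-pixel channel maximum; reflectance is the illumination-free part.
    illum = img.max(axis=2, keepdims=True)
    reflect = img / (illum + eps)
    # Stage 2 (enhancement): brighten the illumination map with a
    # gamma curve, leaving reflectance (colors, textures) untouched.
    illum_enh = np.power(illum, gamma)
    return np.clip(reflect * illum_enh, 0.0, 1.0)

# Usage: a uniformly dark synthetic image becomes visibly brighter.
dark = np.full((4, 4, 3), 0.05)
out = retinex_style_enhance(dark)
```

Operating on the illumination map alone is what lets such pipelines lift dark regions without washing out color, which is the same motivation behind enhancing only the illumination branch in U2E-Net.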
Keywords
Uneven lighting, Image enhancement, Adaptive learning