Unsupervised Image Prior via Prompt Learning and CLIP Semantic Guidance for Low-Light Image Enhancement
CoRR (2024)
Abstract
Currently, low-light conditions present a significant challenge for machine
cognition. In this paper, rather than optimizing models by assuming that human
and machine cognition are correlated, we use zero-reference low-light
enhancement to improve the performance of downstream task models. We propose to
improve the zero-reference low-light enhancement method by leveraging the rich
visual-linguistic CLIP prior without any need for paired or unpaired
normal-light data, which is laborious and difficult to collect. We propose a
simple but effective strategy to learn prompts that help guide the enhancement
method and experimentally show that the prompts learned without any need for
normal-light data improve image contrast, reduce over-enhancement, and reduce
noise over-amplification. Next, we propose to reuse the CLIP model for semantic
guidance via zero-shot open vocabulary classification to optimize low-light
enhancement for task-based performance rather than human visual perception. We
conduct extensive experimental results showing that the proposed method leads
to consistent improvements across various datasets regarding task-based
performance and compare our method against state-of-the-art methods, showing
favorable results across various low-light datasets.
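The abstract's core mechanism, scoring an image against learned "positive" and "negative" text prompts via a CLIP-style similarity softmax, can be sketched in miniature. The sketch below is an illustrative assumption, not the authors' code: embeddings are stand-ins for CLIP image/text features, and the function name, `logit_scale` value, and two-prompt setup are hypothetical simplifications of the paper's prompt-learning objective.

```python
import numpy as np

def clip_prior_score(img_emb, pos_emb, neg_emb, logit_scale=10.0):
    """Probability that an image embedding matches the 'positive' prompt
    (e.g. a learned well-lit prompt) versus the 'negative' one (e.g. a
    low-light prompt), via a softmax over scaled cosine similarities,
    in the style of CLIP zero-shot classification. All inputs are
    placeholder vectors standing in for real CLIP features."""
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    i, p, n = unit(img_emb), unit(pos_emb), unit(neg_emb)
    # Scaled cosine similarities act as classification logits.
    logits = logit_scale * np.array([i @ p, i @ n])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[0]  # mass assigned to the positive prompt
```

In a training loop of the kind the abstract describes, a term such as `1 - clip_prior_score(...)` could serve as an unsupervised loss pushing enhanced images toward the learned positive prompt, with no normal-light references required.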