Two Exposure Fusion Using Prior-Aware Generative Adversarial Network

IEEE TRANSACTIONS ON MULTIMEDIA (2022)

Abstract
Producing a high dynamic range (HDR) image from two low dynamic range (LDR) images with extreme exposures is challenging due to the lack of well-exposed content. Existing works either perform pixel fusion based on weighted quantization or conduct feature fusion using deep learning techniques. In contrast to these methods, our core idea is to progressively incorporate the pixel-domain knowledge of the LDR images into the feature fusion process. Specifically, we propose a novel Prior-Aware Generative Adversarial Network (PA-GAN), along with a new dual-level loss for two-exposure fusion. The proposed PA-GAN is composed of a content-prior-guided encoder and a detail-prior-guided decoder, which are responsible for content fusion and detail calibration, respectively. We further train the network using a dual-level loss that combines a semantic-level loss with a pixel-level loss. Extensive qualitative and quantitative evaluations on diverse image datasets demonstrate that our proposed PA-GAN outperforms state-of-the-art methods.
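To make the dual-level training objective concrete, below is a minimal sketch of how a pixel-level term and a semantic-level term might be combined. The abstract does not specify the exact formulation, so the choice of an L1 pixel loss, a frozen feature extractor for the semantic term, and the balancing weight are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualLevelLoss(nn.Module):
    """Hypothetical sketch of a dual-level loss: a pixel-level L1 term plus a
    semantic-level term computed on features from a frozen encoder. The exact
    terms and weighting used by PA-GAN are not given in the abstract."""

    def __init__(self, feature_extractor: nn.Module, weight_semantic: float = 0.1):
        super().__init__()
        self.features = feature_extractor.eval()   # frozen network providing semantic features
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.l1 = nn.L1Loss()
        self.weight_semantic = weight_semantic     # assumed balancing weight

    def forward(self, fused: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        pixel_loss = self.l1(fused, reference)                        # pixel-level term
        semantic_loss = self.l1(self.features(fused),
                                self.features(reference))             # semantic-level term
        return pixel_loss + self.weight_semantic * semantic_loss


if __name__ == "__main__":
    # Toy usage with a small convolutional stand-in for a pretrained feature extractor.
    toy_features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
    criterion = DualLevelLoss(toy_features)
    fused = torch.rand(1, 3, 64, 64, requires_grad=True)   # fused output of the generator
    reference = torch.rand(1, 3, 64, 64)                   # well-exposed reference image
    loss = criterion(fused, reference)
    loss.backward()
    print(float(loss))
```

In practice the adversarial GAN loss would be added on top of this reconstruction objective; the snippet only illustrates how the two reconstruction levels can be weighted against each other.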
Keywords
Semantics, Decoding, Generative adversarial networks, Dynamic range, Quantization (signal), Calibration, Image fusion, High dynamic range image, exposure fusion, deep learning