
Infrared and visible image fusion using a feature attention guided perceptual generative adversarial network

J. Ambient Intell. Humaniz. Comput. (2022)

Abstract
In recent years, the performance of infrared and visible image fusion has been dramatically improved by deep learning techniques. However, the fusion results are still not satisfactory, as the fused images frequently suffer from blurred details, unenhanced vital regions, and artifacts. To resolve these problems, we have developed a novel feature attention-guided perceptual generative adversarial network (FAPGAN) for fusing infrared and visible images. In FAPGAN, a feature attention module is incorporated into the generator so that it produces a fused image that preserves detailed information while highlighting the vital regions of the source images. The feature attention module consists of a spatial attention part and a pixel attention part: spatial attention enhances the vital regions, while pixel attention makes the network focus on high-frequency information so that fine details are retained. Furthermore, we introduce a perceptual loss, combined with an adversarial loss and a content loss, to optimize the generator. The perceptual loss makes the fused image more similar to the source infrared image at the semantic level, which not only helps the fused image retain the vital targets and detailed information of the infrared image, but also suppresses halo artifacts by reducing the semantic discrepancy between the two. Experimental results on public datasets demonstrate that FAPGAN outperforms state-of-the-art approaches in both subjective visual effect and objective assessment.
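The abstract does not specify the architecture of the feature attention module or the exact loss weighting, but the described mechanism (a spatial gate over vital regions chained with a pixel gate emphasizing high-frequency content, plus a weighted sum of adversarial, content, and perceptual losses) can be sketched as follows. This is a minimal NumPy illustration under assumed design choices: the pooling/gating operations, the use of mean-subtraction as a high-frequency proxy, and the weights `lam_c`, `lam_p` are all hypothetical, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Gate every spatial location of a (C, H, W) feature map.

    Assumed design: pool across channels with average and max, then
    squash the sum into a (0, 1) gate broadcast over all channels.
    """
    avg = feat.mean(axis=0, keepdims=True)   # (1, H, W)
    mx = feat.max(axis=0, keepdims=True)     # (1, H, W)
    gate = sigmoid(avg + mx)                 # vital regions -> gate near 1
    return feat * gate

def pixel_attention(feat):
    """Emphasize high-frequency content per pixel.

    Assumed proxy: deviation from each channel's mean stands in for
    high-frequency information; large deviations get a stronger gate.
    """
    hf = feat - feat.mean(axis=(1, 2), keepdims=True)  # (C, H, W)
    gate = sigmoid(np.abs(hf))
    return feat * gate

def feature_attention(feat):
    """Spatial attention followed by pixel attention, as in the abstract."""
    return pixel_attention(spatial_attention(feat))

def generator_loss(adv, content, perceptual, lam_c=1.0, lam_p=0.1):
    """Weighted sum of the three generator losses; weights are hypothetical."""
    return adv + lam_c * content + lam_p * perceptual
```

The attention chain leaves the feature map's shape unchanged and only rescales activations, so it can be dropped between any two layers of the generator; in a real implementation the gates would be produced by small learned convolutions rather than fixed pooling.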
Key words
Image fusion, Deep learning, Feature extraction, Image processing