AT-GAN: A generative adversarial network with attention and transition for infrared and visible image fusion

Information Fusion (2023)

Abstract
Infrared and visible image fusion methods aim to combine high-intensity instances and detailed texture features into fused images. However, the ability to capture compact features under various adverse conditions is limited because the distribution of these multimodal features is generally cluttered, so targeted designs are necessary to constrain the multimodal features to be compact. In addition, many existing attempts are not robust to low-quality images under adverse conditions, and the high fusion time of most fusion methods hampers subsequent vision tasks. To address these issues, we propose a generative adversarial network with intensity attention modules and semantic transition modules, termed AT-GAN, which extracts key information from multimodal images more efficiently. The intensity attention modules aim to preserve infrared instance features clearly, while the semantic transition modules filter out noise and other redundant features in the visible texture. Moreover, an adaptive fusion equilibrium point can be learned by a quality assessment module. Finally, experiments on a variety of datasets reveal that AT-GAN can adaptively learn feature fusion and image reconstruction synchronously and further improves timeliness while remaining superior to state-of-the-art fusion methods.
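To make the attention-based fusion idea in the abstract concrete, below is a minimal PyTorch sketch of how an intensity attention module could re-weight infrared features before fusing them with visible features. This is not the authors' code; the module names, channel sizes, sigmoid gating, and the simple concatenate-and-decode generator are assumptions for illustration only, and the semantic transition and quality assessment modules described in the paper are omitted.

```python
# Minimal sketch (assumed, not from the paper) of intensity-attention fusion.
import torch
import torch.nn as nn

class IntensityAttention(nn.Module):
    """Per-pixel attention map that emphasizes high-intensity (salient) regions."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat * self.conv(feat)  # re-weight features by the attention map


class SimpleFusionGenerator(nn.Module):
    """Toy generator: encode each modality, apply intensity attention to the
    infrared branch, concatenate, and decode a single fused image."""
    def __init__(self, base: int = 16):
        super().__init__()
        self.enc_ir = nn.Conv2d(1, base, 3, padding=1)
        self.enc_vis = nn.Conv2d(1, base, 3, padding=1)
        self.att = IntensityAttention(base)
        self.dec = nn.Sequential(
            nn.Conv2d(2 * base, base, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 1),
            nn.Tanh(),
        )

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        f_ir = self.att(self.enc_ir(ir))   # emphasize infrared instance features
        f_vis = self.enc_vis(vis)          # visible texture features
        return self.dec(torch.cat([f_ir, f_vis], dim=1))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)   # dummy infrared image
    vis = torch.rand(1, 1, 128, 128)  # dummy visible image
    fused = SimpleFusionGenerator()(ir, vis)
    print(fused.shape)  # torch.Size([1, 1, 128, 128])
```

In AT-GAN this kind of generator would be trained adversarially against a discriminator, with the quality assessment module balancing the contributions of the infrared and visible branches; that training loop is not reproduced here.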
Key words
Infrared and visible images, Image fusion, Adverse conditions, Generative adversarial networks, Attention mechanism