SGFusion: A saliency guided deep-learning framework for pixel-level image fusion.

Inf. Fusion (2023)

Abstract
Pixel-level image fusion, which merges images of different modalities into a single informative image, has attracted increasing attention. Although many methods have been proposed for pixel-level image fusion, effective methods that can handle multiple fusion tasks simultaneously are still lacking. To address this problem, we propose SGFusion, a saliency-guided deep-learning framework for pixel-level image fusion: an end-to-end fusion network that can be applied to a variety of fusion tasks by training a single model. Specifically, the proposed network uses dual-guided encoding, image-reconstruction decoding, and saliency-detection decoding to simultaneously extract feature maps and saliency maps at different scales from the image. The saliency maps produced by the saliency-detection decoder serve as fusion weights for merging the features of the image-reconstruction decoder into the fused image, which effectively extracts meaningful information from the source images and makes the fused image better match human visual perception. Experiments show that the proposed method achieves state-of-the-art performance in infrared and visible image fusion, multi-exposure image fusion, and medical image fusion on various public datasets.
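The abstract describes using saliency maps as per-pixel fusion weights over decoded features. The following is a minimal numpy sketch of that general idea only; the function name, softmax normalization, and toy inputs are assumptions for illustration and are not the paper's actual architecture or weighting scheme.

```python
import numpy as np

def saliency_weighted_fusion(feat_a, feat_b, sal_a, sal_b):
    """Fuse two feature maps using softmax-normalized saliency maps as
    per-pixel fusion weights (illustrative sketch, not SGFusion's exact
    decoder-guided scheme)."""
    # Per-pixel softmax over the two saliency maps gives weights in [0, 1]
    # that sum to 1 at every pixel.
    e_a, e_b = np.exp(sal_a), np.exp(sal_b)
    w_a = e_a / (e_a + e_b)
    w_b = 1.0 - w_a
    return w_a * feat_a + w_b * feat_b

# Toy example: two 4x4 single-channel "feature maps" from two sources.
fa = np.ones((4, 4))            # features from source A
fb = np.zeros((4, 4))           # features from source B
sa = np.full((4, 4), 2.0)       # source A deemed more salient everywhere
sb = np.full((4, 4), 0.0)
fused = saliency_weighted_fusion(fa, fb, sa, sb)
```

With these toy inputs, every fused pixel is a convex combination of the two sources, biased toward the more salient source A.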
Key words
Pixel-level image fusion, Fusion weight, Deep learning, Saliency detection