The Fusion of Infrared and Visible Images via Feature Extraction and Subwindow Variance Filtering

Xin Feng, Haifeng Gong

Journal of Electrical and Computer Engineering (2024)

Abstract
This paper presents a subwindow variance filtering algorithm for fusing infrared and visible images, aimed at addressing blurred details, low contrast, and missing edge features. First, the source images undergo multilevel decomposition with a subwindow variance filter, producing a base layer and multiple detail layers for each image. PCANet extracts features from the base layers and produces weight maps that guide base-layer fusion. For detail-layer fusion, a saliency measurement method is proposed to extract saliency maps from the source images; the saliency maps are compared to obtain an initial weight map, which is then refined with guided filtering and used to guide the fusion of the detail layers. Finally, the fused base layer and detail layers are superimposed to obtain the final fusion result. The proposed algorithm is evaluated with subjective and objective measures, including information entropy, mutual information, multiscale structural similarity, standard deviation, and visual information fidelity. The results demonstrate that the proposed algorithm yields rich detail information, high contrast, and good retention of edge information, making it a promising approach for infrared and visible image fusion.
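To make the decompose–fuse–superimpose pipeline described in the abstract concrete, the following is a minimal Python sketch of that structure only. It is not the paper's method: a plain box (mean) filter stands in for the subwindow variance filter, averaging stands in for the PCANet-derived base-layer weights, and a local-energy saliency measure without guided-filter refinement stands in for the detail-layer weighting; all function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def decompose(img, size=15, levels=2):
    """Multilevel decomposition into one base layer and several detail layers.
    A mean (box) filter is used here as a stand-in for the subwindow variance filter."""
    details = []
    base = img.astype(np.float64)
    for _ in range(levels):
        smoothed = uniform_filter(base, size=size)
        details.append(base - smoothed)  # detail layer = residual at this level
        base = smoothed
    return base, details


def saliency_weights(d_ir, d_vis, size=7, eps=1e-12):
    """Per-pixel weights from a simple local-energy saliency measure; a stand-in
    for the paper's saliency comparison plus guided-filter refinement."""
    s_ir = uniform_filter(d_ir ** 2, size=size)
    s_vis = uniform_filter(d_vis ** 2, size=size)
    w_ir = s_ir / (s_ir + s_vis + eps)
    return w_ir, 1.0 - w_ir


def fuse(ir, vis, levels=2):
    """Fuse co-registered grayscale infrared and visible images (floats in [0, 1])."""
    base_ir, det_ir = decompose(ir, levels=levels)
    base_vis, det_vis = decompose(vis, levels=levels)

    # Base-layer fusion: simple averaging here; the paper uses PCANet weight maps.
    fused = 0.5 * base_ir + 0.5 * base_vis

    # Detail-layer fusion guided by saliency weights, then superimposed on the base.
    for d_ir, d_vis in zip(det_ir, det_vis):
        w_ir, w_vis = saliency_weights(d_ir, d_vis)
        fused += w_ir * d_ir + w_vis * d_vis

    return np.clip(fused, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((128, 128))
    vis = rng.random((128, 128))
    print(fuse(ir, vis).shape)  # (128, 128)
```

The sketch only illustrates the layered structure (decompose each source, fuse base and detail layers with per-layer weight maps, superimpose); the paper's contributions lie in the specific subwindow variance filter, PCANet-based base-layer weights, and guided-filter-refined detail weights, none of which are reproduced here.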