Dual Attention Feature Fusion for Visible-Infrared Object Detection

Artificial Neural Networks and Machine Learning, ICANN 2023, Part VII (2023)

Abstract
Feature fusion is an essential component of multimodal object detection, exploiting both the complementary and the common information between multi-source images. For visible-infrared image pairs, however, visible images are sensitive to illumination and visibility conditions and may therefore carry much interfering information and little useful information. To address this problem, we propose performing common feature enhancement and spatial cross attention sequentially. To this end, we design a novel Dual Attention Transformer Feature Fusion (DATFF) module for the fusion of intermediate feature maps. We integrate it into two-stream object detectors and achieve state-of-the-art performance on the DroneVehicle and FLIR visible-infrared object detection datasets. Our code is available at https://github.com/a21401624/DATFF.
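The abstract only names the two stages (common feature enhancement, then spatial cross attention), so the following is a minimal PyTorch sketch of such a two-stream fusion block, not the authors' implementation: the class name `DualAttentionFusionSketch`, the shared channel gate, and the per-modality `nn.MultiheadAttention` cross attention are all illustrative assumptions about how the two stages could be wired together.

```python
# Hypothetical sketch: channel-wise "common feature enhancement" followed by
# spatial cross attention between visible and infrared feature maps.
# Module and parameter names are illustrative, not the paper's DATFF code.
import torch
import torch.nn as nn


class DualAttentionFusionSketch(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Channel attention on the element-wise sum of both modalities,
        # re-weighting each stream toward their common information.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial cross attention: each modality queries the other one.
        self.cross_attn_vis = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.cross_attn_ir = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.out_proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, vis: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        b, c, h, w = vis.shape
        # 1) Common feature enhancement via a shared channel gate.
        gate = self.channel_gate(vis + ir)
        vis, ir = vis * gate, ir * gate
        # 2) Spatial cross attention over flattened H*W token sequences.
        vis_seq = vis.flatten(2).transpose(1, 2)   # (B, H*W, C)
        ir_seq = ir.flatten(2).transpose(1, 2)
        vis_out, _ = self.cross_attn_vis(vis_seq, ir_seq, ir_seq)
        ir_out, _ = self.cross_attn_ir(ir_seq, vis_seq, vis_seq)
        vis_out = vis_out.transpose(1, 2).reshape(b, c, h, w)
        ir_out = ir_out.transpose(1, 2).reshape(b, c, h, w)
        # Fuse the two attended streams into one feature map for the detector head.
        return self.out_proj(torch.cat([vis_out, ir_out], dim=1))


if __name__ == "__main__":
    fuse = DualAttentionFusionSketch(channels=256)
    vis = torch.randn(2, 256, 32, 32)
    ir = torch.randn(2, 256, 32, 32)
    print(fuse(vis, ir).shape)  # torch.Size([2, 256, 32, 32])
```

In a two-stream detector, one such block would be applied per pyramid level to the intermediate visible and infrared feature maps, and the fused output passed to the shared detection head; the authors' actual module is available at the repository linked above.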
Key words
Feature fusion, Visible-infrared, Object detection