
Modality-Consistent Attention for Visible-Infrared Vehicle Re-Identification

Qianqian Zhao, Jiajun Su, Jianqing Zhu, Liu, Huanqiang Zeng

IEEE Signal Processing Letters (2024)

Abstract
Visible-infrared vehicle re-identification (VIVR) seeks to match vehicle images of the same identity taken by cameras of different modalities. The noticeable disparity between visible and infrared modalities leads to attention deviations, causing deep models to incorrectly focus on different local regions of vehicles in visible and infrared images. We observed that the spatial distributions of distinguishing local regions, such as logos, front windows, and wheels, exhibit similarity in average images obtained from both visible and infrared images. Based on this, we propose a modality-consistent attention (MCA) approach for VIVR. Unlike image-level attention, our MCA is identity-level attention that holistically emphasizes the distinguishing regions of a vehicle identity across multiple images captured from various viewpoints. Furthermore, we constrain the differences between the identity-level spatial attention masks resulting from visible and infrared modalities. This approach helps deep networks focus consistently on learning the distinguishing local characteristics of vehicles across different modalities and viewpoints. Our experiments on RGBN300 and MSVR310 datasets demonstrate that our approach achieves state-of-the-art performance.
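The abstract describes identity-level spatial attention masks and a constraint that keeps the visible and infrared masks of the same identity close. The sketch below is a minimal PyTorch illustration of that idea only, assuming channel-averaged, softmax-normalized spatial masks, per-identity averaging, and an L1 consistency penalty; the function names and exact loss form are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a modality-consistent attention loss (assumptions noted above),
# not the authors' released code.
import torch
import torch.nn.functional as F


def spatial_attention_mask(features):
    """Collapse a feature map (B, C, H, W) into a spatial mask (B, H, W)
    by channel-wise averaging and softmax over spatial positions."""
    mask = features.mean(dim=1)                      # (B, H, W)
    B, H, W = mask.shape
    return F.softmax(mask.view(B, -1), dim=1).view(B, H, W)


def identity_level_mask(features, labels, identity):
    """Average the spatial masks of all images sharing one identity label,
    giving a single identity-level attention mask (H, W)."""
    masks = spatial_attention_mask(features)         # (B, H, W)
    return masks[labels == identity].mean(dim=0)


def mca_consistency_loss(vis_features, ir_features, vis_labels, ir_labels):
    """Penalize differences between identity-level masks from the visible
    and infrared modalities (L1 distance used here as an assumption)."""
    shared_ids = set(vis_labels.tolist()) & set(ir_labels.tolist())
    losses = []
    for identity in shared_ids:
        m_vis = identity_level_mask(vis_features, vis_labels, identity)
        m_ir = identity_level_mask(ir_features, ir_labels, identity)
        losses.append((m_vis - m_ir).abs().mean())
    if not losses:
        return vis_features.new_zeros(())
    return torch.stack(losses).mean()


# Example usage with random tensors standing in for backbone feature maps.
if __name__ == "__main__":
    vis_feat = torch.randn(8, 256, 16, 16)           # visible-modality features
    ir_feat = torch.randn(8, 256, 16, 16)            # infrared-modality features
    vis_ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    ir_ids = torch.tensor([0, 1, 1, 2, 2, 3, 3, 3])
    print(mca_consistency_loss(vis_feat, ir_feat, vis_ids, ir_ids))
```

In this reading, the consistency term would be added to the usual re-identification objectives (e.g., identification and metric losses) so that both modalities are steered toward the same distinguishing vehicle regions.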
Key words
visible-infrared vehicle re-identification, deep learning, modality-consistent attention