Intermediary-Guided Bidirectional Spatial–Temporal Aggregation Network for Video-Based Visible-Infrared Person Re-Identification

IEEE Transactions on Circuits and Systems for Video Technology (2023)

Abstract
This work focuses on video-based visible-infrared person re-identification, a promising technique for 24-hour surveillance systems. Two main issues in this field are mitigating the modality discrepancy and mining spatial–temporal information. In this work, we propose a novel method, named the Intermediary-guided Bidirectional spatial–temporal Aggregation Network (IBAN), to address both issues at once. Specifically, IBAN learns modality-irrelevant features by leveraging anaglyph data of pedestrian images as the intermediary. Furthermore, a bidirectional spatial–temporal aggregation module is introduced to exploit the spatial–temporal information in video data while mitigating the impact of noisy image frames. Finally, we design an easy-sample-based loss to guide the final embedding space and further improve the model's generalization performance. Extensive experiments on video-based visible-infrared benchmarks show that IBAN achieves promising results and outperforms state-of-the-art ReID methods by a large margin, improving rank-1/mAP by $1.29\%/3.46\%$ in the infrared-to-visible setting and by $5.04\%/3.27\%$ in the visible-to-infrared setting. The source code of the proposed method will be released at https://github.com/lhf12278/IBAN .
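For readers unfamiliar with anaglyph data, the minimal PyTorch-style sketch below illustrates one way an anaglyph-style intermediary image could be derived from a single RGB pedestrian frame by recombining shifted color channels. The `to_anaglyph` helper and the channel-shift construction are illustrative assumptions for exposition only; the exact anaglyph generation used by IBAN is specified in the paper, not here.

```python
# Illustrative sketch (assumption): build an anaglyph-style intermediary image
# from one RGB frame by pairing the red channel with horizontally shifted
# green/blue channels, mimicking the red/cyan composition of classic anaglyphs.
import torch

def to_anaglyph(rgb: torch.Tensor, shift: int = 4) -> torch.Tensor:
    """rgb: (3, H, W) tensor in [0, 1]; returns a (3, H, W) anaglyph-style tensor."""
    r, g, b = rgb[0], rgb[1], rgb[2]
    # Shift green/blue along the width axis to emulate a second viewpoint.
    g_shift = torch.roll(g, shifts=shift, dims=1)
    b_shift = torch.roll(b, shifts=shift, dims=1)
    return torch.stack([r, g_shift, b_shift], dim=0)

# Usage example on a dummy frame:
frame = torch.rand(3, 256, 128)
intermediary = to_anaglyph(frame)
```

Such an intermediary representation sits between the visible and infrared modalities, which is why the paper uses it to guide modality-irrelevant feature learning.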
Keywords
Visible-infrared person re-identification, bidirectional spatial-temporal aggregation, anaglyph data, modality discrepancy