Dual-Semantic Consistency Learning for Visible-Infrared Person Re-Identification

IEEE Transactions on Information Forensics and Security (2023)

Abstract
Visible-Infrared person Re-Identification (VI-ReID) performs identity matching across non-overlapping visible and infrared camera sets for intelligent surveillance systems, and it faces large instance variations arising from the modality discrepancy. Existing methods employ various network structures to extract modality-invariant features. In contrast, we propose a novel framework, the Dual-Semantic Consistency Learning Network (DSCNet), which attributes the modality discrepancy to channel-level semantic inconsistency. DSCNet optimizes channel consistency from two aspects: fine-grained inter-channel semantics and comprehensive inter-modality semantics. Furthermore, we propose Joint Semantics Metric Learning to simultaneously optimize the distribution of the channel- and modality-level feature embeddings, jointly exploiting the correlation between channel-specific and modality-specific semantics in a fine-grained manner. Experiments on the SYSU-MM01 and RegDB datasets validate that DSCNet outperforms current state-of-the-art methods. On the more challenging SYSU-MM01 dataset, our network achieves 73.89% Rank-1 accuracy and 69.47% mAP. Our code is available at https://github.com/bitreidgroup/DSCNet.
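The abstract describes the approach only at a high level. As a rough illustration (not the authors' implementation, whose details are in the linked repository), the sketch below shows one way a channel-level consistency penalty could be combined with a standard cross-modality batch-hard triplet term in PyTorch. The function names, the (B, C) feature shapes, and the margin value are assumptions made for illustration.

```python
# Illustrative sketch only: a channel-level consistency penalty plus a
# cross-modality batch-hard triplet term, assuming (B, C) feature
# embeddings from paired visible and infrared branches.
import torch
import torch.nn.functional as F

def channel_consistency_loss(feat_vis, feat_ir):
    """Encourage channel-wise semantics to align across modalities.

    feat_vis, feat_ir: (B, C) embeddings for the same batch of identities.
    """
    # Normalize each channel across the batch so the comparison reflects
    # channel-level semantic statistics rather than raw magnitudes.
    v = F.normalize(feat_vis, dim=0)
    r = F.normalize(feat_ir, dim=0)
    # Penalize the per-channel discrepancy between the two modalities.
    return (v - r).pow(2).sum(dim=0).mean()

def joint_semantics_loss(feat_vis, feat_ir, labels, margin=0.3):
    """Toy joint objective: cross-modality triplet + channel consistency."""
    feats = torch.cat([feat_vis, feat_ir], dim=0)   # (2B, C)
    ids = torch.cat([labels, labels], dim=0)        # (2B,)
    dist = torch.cdist(feats, feats)                # pairwise L2 distances
    same = ids.unsqueeze(0).eq(ids.unsqueeze(1))    # same-identity mask
    # Batch-hard mining: hardest positive and hardest negative per anchor.
    pos = (dist * same.float()).max(dim=1).values
    neg = (dist + same.float() * 1e6).min(dim=1).values
    triplet = F.relu(pos - neg + margin).mean()
    return triplet + channel_consistency_loss(feat_vis, feat_ir)
```

The channel term treats each feature channel as a semantic unit and pulls its batch-level statistics together across modalities, while the triplet term shapes the overall embedding distribution; the paper's actual losses and weighting may differ.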
Keywords
Semantics, Feature extraction, Cameras, Task analysis, Measurement, Training, Image color analysis, Visible-infrared person re-identification, person re-identification, semantic consistency