Source-free Style-diversity Adversarial Domain Adaptation with Privacy-preservation for person re-identification

Knowledge-Based Systems (2024)

Abstract
Unsupervised domain adaptation (UDA) techniques for person re-identification (ReID) have been extensively studied to facilitate the transfer of knowledge from labeled source domains to unlabeled target domains. However, the need to access the source data raises privacy concerns in real-world scenarios. To overcome this limitation, source-free domain adaptation (SFDA) was introduced, enabling adaptation without requiring access to the source data and relying instead on a well-trained source model. Nevertheless, existing SFDA methods assume a shared label space and overlook the significance of domain-style discrepancies in person ReID, limiting their applicability to source-free domain adaptive person ReID. In this paper, we present a novel approach called Source-free Style-diversity Adversarial Domain Adaptation with Privacy-preservation (S2ADAP) for person ReID to address these challenges. Our approach handles inter-domain pedestrian appearance style differences through GAN-based domain-style diversity augmentation, and intra-domain individual style misalignment through adversarial mutual teaching learning, without accessing data from the source domain. We leverage a pre-trained model as a person appearance style encoder to enhance source-similar style diversity in the target domain, and achieve intra-domain individual style alignment by introducing a domain style discriminator that promotes the discriminability of person semantic features for domain adaptation. The experimental results on publicly available person ReID datasets affirm the efficacy of our approach, offering a promising and privacy-preserving solution for person ReID tasks.
Keywords
Person re-identification, Source-free domain adaptation, Privacy preservation
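The abstract describes two key ingredients: pseudo-supervision from a teacher model in a mutual-teaching setup, and a domain style discriminator that adversarially aligns target features with source-similar (style-augmented) features. The sketch below is not the authors' implementation; it is a minimal, assumption-laden illustration of that general pattern using a gradient reversal layer, an EMA teacher for pseudo-labels, and a toy discriminator. All module names, dimensions, and loss weights are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code): adversarial style alignment via a
# domain discriminator with gradient reversal, plus an EMA teacher that
# supplies pseudo-labels, as one plausible reading of "adversarial mutual
# teaching". Sizes, names, and weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None


class StudentReID(nn.Module):
    """Toy feature extractor + identity classifier standing in for a ReID backbone."""
    def __init__(self, feat_dim=128, num_ids=100):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
        self.id_head = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        f = self.backbone(x)
        return f, self.id_head(f)


# Hypothetical domain-style discriminator: separates target-styled features
# from source-similar (style-augmented) features.
disc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

student = StudentReID()
teacher = StudentReID()
teacher.load_state_dict(student.state_dict())  # EMA teacher starts as a copy
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(student.parameters()) + list(disc.parameters()), lr=3e-4)

# Dummy batch: target-domain features and their style-augmented counterparts
# (in the paper these would come from GAN-based style augmentation).
target_x = torch.randn(32, 2048)
styled_x = torch.randn(32, 2048)
domain_y = torch.cat([torch.zeros(32), torch.ones(32)]).long()

for step in range(5):
    feat_t, logits_t = student(target_x)
    feat_s, _ = student(styled_x)

    # Pseudo-labels from the frozen EMA teacher (mutual-teaching signal).
    with torch.no_grad():
        _, teacher_logits = teacher(target_x)
        pseudo = teacher_logits.argmax(dim=1)
    loss_id = F.cross_entropy(logits_t, pseudo)

    # Adversarial style alignment: the discriminator tells the two styles
    # apart while reversed gradients push the backbone to confuse it.
    feats = torch.cat([feat_t, feat_s], dim=0)
    dom_logits = disc(GradReverse.apply(feats, 1.0))
    loss_dom = F.cross_entropy(dom_logits, domain_y)

    loss = loss_id + 0.1 * loss_dom  # 0.1 is an arbitrary weight
    opt.zero_grad()
    loss.backward()
    opt.step()

    # EMA update of the teacher.
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(0.99).add_(sp, alpha=0.01)
```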