Turning backdoors for efficient privacy protection against image retrieval violations

Information Processing & Management (2023)

Abstract
Image retrieval, empowered by deep metric learning, is undoubtedly a building block of today's media-sharing practices, but it also poses a severe risk of privacy violations through retrieval. State-of-the-art countermeasures are built on adversarial learning, which degrades the image-sharing experience with significant latency. To relieve the cumbersome experience of such data-centric approaches, we propose a plug-and-play privacy-preserving design (MIP) against image retrieval violations by exploiting the rule-based triggering characteristics of model backdoors. The basic idea is to inject a privacy-preserving backdoor into the global retrieval model via backdoor learning, thus preventing shared images carrying such triggers from being searched. At its core, two types of triplet loss functions are devised: an imperceptible loss that preserves normal retrieval performance, and a privacy-sensitive loss that disturbs retrieval through deliberate privacy backdoor injection. Extensive experiments on four widely used, realistic datasets show that MIP provides an outstanding privacy-preserving (backdoor) success rate, e.g., the poisoned retrieval mAP is reduced to 0.33% (98.12%↓) on CUB-200, 0.04% (99.84%↓) on In-Shop, 0.64% (99.59%↓) on CARS196, and 0.01% (99.98%↓) on SOP, respectively, while maintaining comparable normal retrieval performance (0.02%↓ on average), and achieves superior efficiency (seven orders of magnitude lower latency) over the baselines. Moreover, as a model-centric solution, MIP yields imperceptible visual changes and is demonstrated to resist potential black-box defenses (e.g., image filtering) and white-box defenses (e.g., fine-pruning). The code and data will be made available at https://github.com/lqsunshine/MIP.
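The two triplet objectives described in the abstract can be sketched roughly as follows. This is a minimal illustration assuming the standard triplet margin formulation over embedding vectors; the function names (`imperceptible_loss`, `privacy_sensitive_loss`) and the role-inverted form of the privacy term are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss: pull the anchor toward the positive
    and push it away from the negative (hinged at `margin`)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

def imperceptible_loss(anchor_clean, positive, negative, margin=0.2):
    """On clean (trigger-free) images the backdoored model should behave
    normally, so the usual triplet objective applies unchanged."""
    return triplet_loss(anchor_clean, positive, negative, margin)

def privacy_sensitive_loss(anchor_triggered, positive, negative, margin=0.2):
    """Hypothetical role-inverted triplet for trigger-stamped images:
    push the triggered anchor AWAY from its true matches so the
    protected image can no longer be retrieved."""
    return triplet_loss(anchor_triggered, negative, positive, margin)

# Toy embeddings: the anchor sits close to its positive, far from the negative.
anchor   = np.array([1.0, 0.0])
positive = np.array([1.0, 0.1])
negative = np.array([0.0, 1.0])

normal  = imperceptible_loss(anchor, positive, negative)      # 0 here: normal retrieval intact
privacy = privacy_sensitive_loss(anchor, positive, negative)  # > 0: triggered retrieval disturbed
```

In training, the two terms would be combined so that clean triplets keep the retrieval model accurate while triggered triplets embed the privacy-preserving backdoor.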
Keywords
image retrieval violations, backdoors, efficient privacy protection