Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints
arXiv (2023)
Abstract
Models leveraging both visual and textual data, such as Contrastive
Language-Image Pre-training (CLIP), are the backbone of many recent advances in
artificial intelligence. In this work, we show that despite their versatility,
such models are vulnerable to what we refer to as fooling master images.
Fooling master images are capable of maximizing the confidence score of a CLIP
model for a significant number of widely varying prompts, while being either
unrecognizable or unrelated to the attacked prompts for humans. The existence
of such images is problematic, as they could be used by bad actors to maliciously
interfere with CLIP-trained image retrieval models in production with
comparatively small effort, since a single image can attack many different prompts. We
demonstrate how fooling master images for CLIP (CLIPMasterPrints) can be mined
using stochastic gradient descent, projected gradient descent, or blackbox
optimization. Contrary to many common adversarial attacks, the blackbox
optimization approach allows us to mine CLIPMasterPrints even when the weights
of the model are not accessible. We investigate the properties of the mined
images, and find that images mined for a small number of image captions
generalize to a much larger number of semantically related captions. We
evaluate possible mitigation strategies, in which we increase the robustness of
the model and introduce an approach to automatically detect CLIPMasterPrints in
order to sanitize the inputs of vulnerable models. Finally, we find that vulnerability to
CLIPMasterPrints is related to a modality gap in contrastive pre-trained
multi-modal networks. Code available at
https://github.com/matfrei/CLIPMasterPrints.
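
The abstract states that CLIPMasterPrints can be mined with stochastic gradient descent by maximizing a CLIP model's score for a set of attacked prompts. The sketch below illustrates that general idea only; it is not the authors' released code (see the repository above for that) and assumes OpenAI's open-source `clip` package, a hypothetical set of three target prompts, and a plain Adam optimizer over raw pixels.

```python
# Illustrative sketch only (not the authors' released code): mine a single image
# that maximizes CLIP's image-text similarity for several prompts at once,
# using plain gradient descent as the abstract describes.
# Assumes OpenAI's open-source "clip" package (pip install git+https://github.com/openai/CLIP).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().eval()
for p in model.parameters():  # only the image is optimized, so freeze the model
    p.requires_grad_(False)

# Hypothetical target prompts; the paper attacks a much larger, widely varying set.
prompts = ["a photo of a dog", "a photo of a car", "a photo of a mountain"]
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize(prompts).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Optimize raw pixels in [0, 1]; CLIP's channel normalization is applied inline.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=1e-2)

for step in range(1000):
    optimizer.zero_grad()
    img_feat = model.encode_image((image.clamp(0, 1) - mean) / std)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    # Maximize the mean cosine similarity to all attacked prompts.
    loss = -(img_feat @ text_feat.T).mean()
    loss.backward()
    optimizer.step()
```

A projected-gradient variant would additionally project the image back onto a valid pixel range (or a norm ball) after each step, while the black-box setting mentioned in the abstract would replace the gradient step with a derivative-free optimizer that only queries the model's similarity scores.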