One-to-Multiple Clean-Label Image Camouflage (OmClic) based backdoor attack on deep learning

KNOWLEDGE-BASED SYSTEMS (2024)

Abstract
Image camouflage has been utilized to create clean-label poisoned images for implanting a backdoor into a deep learning (DL) model. However, a crucial limitation is that one attack/poisoned image can only fit a single input size of the DL model, which greatly increases the attack budget when multiple commonly adopted input sizes of DL models are targeted. This work proposes to craft an attack image through camouflaging that fits multiple DL model input sizes simultaneously, namely OmClic. Through OmClic, a backdoor can always be implanted regardless of which common input size the user chooses to train the DL model, given the same attack budget (i.e., a fraction of the poisoning rate). With the camouflaging algorithm formulated as a multi-objective optimization, M = 5 input sizes can be concurrently targeted with one attack image whose artifact remains almost visually imperceptible. Extensive evaluations validate that the proposed OmClic reliably succeeds in various settings using diverse types of images. Further experiments on OmClic-based backdoor insertion into DL models show that high backdoor performance (i.e., attack success rate and clean data accuracy) is achievable no matter which common input size the user randomly chooses to train the model. Consequently, the OmClic-based backdoor attack budget is reduced by a factor of M compared with the state-of-the-art camouflage-based backdoor attack as a baseline. Notably, the same set of OmClic-based poisonous attack images is transferable to different model architectures for backdoor implantation.
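The abstract describes crafting one attack image that, when downscaled to any of M common input sizes, reveals a chosen target image while remaining visually close to a benign source image. The sketch below illustrates that idea only in outline: it scalarizes the multi-objective problem as a weighted sum of per-size camouflage losses plus an imperceptibility term, and assumes a bilinear-resizing pipeline. The function name craft_omclic_image, the loss weighting, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def craft_omclic_image(source, targets, steps=2000, lr=0.01, lam=1.0):
    """Illustrative sketch of a one-to-multiple camouflage optimization.

    source:  float tensor of shape (3, H, W), values in [0, 1] (benign image)
    targets: dict mapping (h, w) -> float tensor of shape (3, h, w)
             (the image each downscaled input size should reveal)
    """
    attack = source.clone().requires_grad_(True)
    opt = torch.optim.Adam([attack], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Imperceptibility term: keep the attack image close to the source.
        loss = lam * F.mse_loss(attack, source)
        # One camouflage term per target input size (weighted-sum stand-in
        # for the paper's multi-objective formulation).
        for (h, w), tgt in targets.items():
            down = F.interpolate(attack.unsqueeze(0), size=(h, w),
                                 mode="bilinear", align_corners=False)
            loss = loss + F.mse_loss(down.squeeze(0), tgt)
        loss.backward()
        opt.step()
        with torch.no_grad():
            attack.clamp_(0.0, 1.0)  # keep pixel values in a valid range

    return attack.detach()
```

For example, targets = {(224, 224): t1, (96, 96): t2} would ask one attack image to reveal t1 under a 224x224 resize and t2 under a 96x96 resize. A real camouflage attack must match the exact resizing algorithm of the victim's training pipeline; the bilinear choice above is only an assumption for illustration.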
Keywords
Camouflage attack, One-to-multiple, Backdoor attack, Clean-label data poisoning, Machine learning