Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift

AAAI 2024

Abstract
Diffusion models (DMs) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training. However, recent studies have shown that they are vulnerable to backdoor attacks: when a data input (e.g., some Gaussian noise) is stamped with a trigger (e.g., a white patch), the backdoored model always generates the target image (e.g., an improper photo). Effective defense strategies to mitigate backdoors in DMs remain underexplored. To bridge this gap, we propose the first backdoor detection and removal framework for DMs. We evaluate our framework, Elijah, on hundreds of DMs of 3 types (DDPM, NCSN, and LDM) with 13 samplers against 3 existing backdoor attacks. Extensive experiments show that our approach achieves close to 100% detection accuracy and reduces backdoor effects to close to zero without significantly sacrificing model utility.
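As a concrete illustration of the attack pattern the abstract describes, the minimal sketch below stamps a trigger patch onto the initial Gaussian noise fed to a sampler. The tensor shapes, the white-patch trigger, and the stamp_trigger helper are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch

def stamp_trigger(noise: torch.Tensor, trigger: torch.Tensor,
                  mask: torch.Tensor) -> torch.Tensor:
    """Blend a trigger into initial Gaussian noise.

    mask is 1 where the trigger overwrites the noise, 0 elsewhere.
    (Hypothetical helper for illustration; not from the paper.)
    """
    return mask * trigger + (1.0 - mask) * noise

if __name__ == "__main__":
    noise = torch.randn(1, 3, 32, 32)       # benign sampler input
    trigger = torch.ones(1, 3, 32, 32)      # e.g., a white patch (assumed shape)
    mask = torch.zeros(1, 3, 32, 32)
    mask[..., :8, :8] = 1.0                 # patch in the top-left corner
    stamped = stamp_trigger(noise, trigger, mask)
    # A clean DM maps both inputs to in-distribution samples; a backdoored
    # DM consistently maps `stamped` to the attacker's target image. The
    # shift this induces in the input distribution is what a detection
    # framework like Elijah can exploit.
```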
Keywords
ML: Adversarial Learning & Robustness, CV: Adversarial Attacks & Robustness, ML: Deep Generative Models & Autoencoders, PEAI: Safety, Robustness & Trustworthiness