PerDoor: Persistent Backdoors in Federated Learning using Adversarial Perturbations

COINS (2023)

Abstract
Federated Learning (FL) enables numerous participants to train deep learning models collaboratively without exposing sensitive personal data. However, the distributed nature of FL and its reliance on unvetted data make it vulnerable to backdoor attacks, in which malicious functionality is injected into the centralized model during training, causing adversary-desired misclassifications for specific adversary-chosen inputs. Prior works established successful backdoor injection in FL systems; however, these backdoors are not demonstrated to be long-lasting. Backdoor functionality does not survive once the adversary is prevented from training, since the centralized model continuously mutates during successive FL rounds. This work proposes PerDoor, a persistent-by-construction backdoor injection technique for FL, driven by adversarial perturbation and targeting parameters of the centralized model that deviate less in successive FL rounds and contribute the least to main task accuracy. Exhaustive evaluation on image classification scenarios shows up to 8.2x greater persistence by PerDoor compared to state-of-the-art backdoor attacks in FL, and demonstrates its potency against state-of-the-art backdoor prevention methods.
Keywords
Backdoor Attacks, Adversarial Perturbation, Federated Learning
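The abstract's core idea of targeting parameters that deviate little across successive FL rounds can be illustrated with a minimal sketch. The function below is hypothetical (the paper's actual criterion also weighs contribution to main-task accuracy, which is not modeled here): given flattened snapshots of the global model from several rounds, it ranks parameters by their drift and returns the most stable ones as candidate injection targets.

```python
import numpy as np

def low_deviation_indices(round_params, k):
    """Hypothetical selection step: given one flattened global-model
    parameter vector per FL round, return the indices of the k
    parameters that changed the least across rounds -- stable
    coordinates a persistent backdoor might target."""
    stacked = np.stack(round_params)   # shape: (num_rounds, num_params)
    deviation = stacked.std(axis=0)    # per-parameter drift across rounds
    return np.argsort(deviation)[:k]   # k most stable parameter indices

# Toy example: 3 rounds, 5 parameters; parameter 2 is nearly static.
rounds = [np.array([0.1, 0.5, 0.30, 0.9, 0.2]),
          np.array([0.3, 0.1, 0.31, 0.5, 0.8]),
          np.array([0.9, 0.7, 0.30, 0.1, 0.4])]
print(low_deviation_indices(rounds, 1))  # -> [2]
```

The intuition, per the abstract, is that such low-drift parameters are less likely to be overwritten by subsequent benign aggregation rounds, so modifications to them persist longer.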