Robust Federated Learning against Backdoor Attackers.

INFOCOM Workshops (2023)

Abstract
Federated learning is a privacy-preserving alternative for distributed learning that involves no data transfer. Because the server has no control over clients' actions, adversaries may participate in learning to corrupt the underlying model. A backdoor attacker is one such adversary, who injects a trigger pattern into the data to manipulate the model's outcomes on a specific sub-task. This work aims to identify backdoor attackers and to mitigate their effects by isolating their weight updates. Leveraging the correlation between clients' gradients, we propose two graph-theoretic algorithms to separate attackers from benign clients. On a classification task, the experimental results show that our algorithms are effective and robust against attackers who add backdoor trigger patterns at different locations in the targeted images. The results also show that our algorithms are superior to existing methods, especially when the attackers outnumber the benign clients.
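The abstract does not spell out the graph construction, but the general idea it describes, building a similarity graph over clients' gradient updates and partitioning it, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the cosine-similarity measure, the edge threshold, and the connected-components partition are all assumptions made here for concreteness.

```python
import numpy as np

def pairwise_cosine(updates):
    """Cosine similarity between flattened client updates (n_clients x d)."""
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    unit = updates / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def partition_clients(updates, threshold=0.5):
    """Hypothetical sketch: connect clients whose updates correlate above
    `threshold`, then return the connected components of that graph.
    Components are returned largest-first; how to decide which component
    is benign is left to the actual defense (the abstract notes the
    proposed methods work even when attackers outnumber benign clients,
    so 'largest component is benign' would NOT suffice in general)."""
    n = len(updates)
    adj = pairwise_cosine(updates) > threshold
    seen = [False] * n
    components = []
    for start in range(n):
        if seen[start]:
            continue
        stack, comp = [start], []
        seen[start] = True
        while stack:  # iterative DFS over the similarity graph
            u = stack.pop()
            comp.append(u)
            for v in range(n):
                if adj[u, v] and not seen[v]:
                    seen[v] = True
                    stack.append(v)
        components.append(sorted(comp))
    components.sort(key=len, reverse=True)
    return components
```

For example, if five clients push updates near one direction and three push updates near the opposite direction (as a backdoor objective might induce), the sketch above separates them into two components.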
Keywords
Federated learning, backdoor, robustness, targeted attackers