BPFL: Blockchain-Based Privacy-Preserving Federated Learning against Poisoning Attack

Information Sciences (2024)

Abstract
In federated learning (FL), multiple clients train models on their local datasets and submit local gradients to a server for aggregation. However, malicious clients may degrade the model's performance by submitting poisoned gradients. Moreover, in most application scenarios clients do not want to reveal their trained models, since their private data may be inferred from them. In addition, most FL protocols lack an incentive mechanism to supervise participants and cannot punish malicious ones, which is unfair to honest participants. To tackle these problems, we propose blockchain-based privacy-preserving federated learning against poisoning attacks (BPFL). In BPFL, a blockchain-based incentive mechanism supervises participants and promptly traces malicious behavior. BPFL also protects the privacy of both local and aggregated models even when some participants are malicious, and detects poisoned data by computing the cosine similarity between each client's local gradient and the aggregated gradient using the Paillier cryptosystem with threshold decryption. Experiments show that under poisoning attacks BPFL improves model accuracy on CIFAR-10 from 10% to 75%, demonstrating that BPFL can effectively resist poisoning attacks while preserving the privacy of local and aggregated models.
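The detection idea in the abstract — comparing each client's gradient direction against the aggregate via cosine similarity — can be sketched in plaintext as below. This is an illustrative reconstruction, not the paper's implementation: the function name, the mean-aggregation step, and the zero threshold are assumptions, and in BPFL the comparison is carried out on Paillier-encrypted gradients rather than in the clear.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two flat gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def detect_poisoned(gradients, threshold=0.0):
    """Flag gradients whose direction disagrees with the aggregate.

    Returns (flagged_indices, aggregate). A plaintext sketch of the
    detection logic; BPFL performs this check under encryption.
    """
    n = len(gradients)
    dim = len(gradients[0])
    # Aggregate = coordinate-wise mean of all submitted gradients
    aggregate = [sum(g[i] for g in gradients) / n for i in range(dim)]
    flagged = [idx for idx, g in enumerate(gradients)
               if cosine_similarity(g, aggregate) < threshold]
    return flagged, aggregate

# Three roughly aligned honest clients plus one sign-flipped (poisoned) update
grads = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [-1.0, -1.0]]
flagged, _ = detect_poisoned(grads)
print(flagged)  # → [3]
```

A sign-flipped gradient points against the honest majority direction, so its cosine similarity with the aggregate is negative and it is flagged, while honest updates score close to 1.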
Keywords
Blockchain, Federated Learning, Poisoning Attack, Privacy-preserving
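The abstract's privacy mechanism rests on the additive homomorphism of the Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, which lets a server aggregate encrypted gradients without decrypting any individual one. A minimal textbook sketch of that property is below, using deliberately tiny, insecure parameters (real deployments use moduli of 2048 bits or more, and BPFL additionally uses threshold decryption, which is not shown here).

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Tiny, INSECURE demo primes for illustration only
p, q = 17, 19
n = p * q            # public modulus
n2 = n * n
g = n + 1            # standard generator choice
lam = lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2, with random r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Paillier decryption: m = L(c^lambda mod n^2) * mu mod n."""
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(5), encrypt(7)
# Additive homomorphism: the product of ciphertexts decrypts to the sum
print(decrypt((c1 * c2) % n2))  # → 12
```

In an FL setting, each client would encrypt its gradient coordinates, the server would multiply the ciphertexts coordinate-wise to obtain an encrypted aggregate, and only the combined key holders could decrypt the result.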