Privacy-Preserving and Byzantine-Robust Federated Learning

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING (2024)

Abstract
Federated learning (FL) trains a model over multiple datasets by collecting the participants' local models rather than their raw data, which facilitates distributed data analysis in many real-world applications. Since model parameters can leak information about the training datasets, it is necessary to preserve the privacy of the FL participants' local models. Furthermore, FL is vulnerable to poisoning attacks, which can significantly degrade model utility. To address these issues, we propose a privacy-preserving and Byzantine-robust FL scheme, π_P2BroFL, that simultaneously maintains robustness in the presence of poisoning attacks and preserves the privacy of local models. Specifically, π_P2BroFL leverages three-party computation (3PC) to securely realize a Byzantine-robust aggregation method. To improve the efficiency of privacy-preserving local model selection and aggregation, we propose a maliciously secure top-k protocol, π_top-k, with low communication overhead. Moreover, we present an efficient maliciously secure shuffling protocol, π_shuffle, since secure shuffling is a prerequisite of our secure top-k protocol. We give a security proof of the scheme and conduct experiments on real-world datasets. When the proportion of Byzantine participants is 50%, the error rate of the model increases by only 1.05%, whereas it increases by 23.78% without our protection.
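The abstract does not spell out the scoring rule behind π_top-k, so the sketch below is a rough, non-secure illustration only: it selects the k most mutually consistent local models in the clear, using a pairwise-distance score that is an assumption in the spirit of Byzantine-robust aggregation (all names here are hypothetical). The actual protocol would perform this selection obliviously under maliciously secure 3PC, with π_shuffle hiding which models were chosen.

```python
import numpy as np

def select_top_k(local_models, k):
    """Plaintext analogue of top-k model selection: score each local
    model by its summed distance to all other models and keep the k
    models with the smallest scores (the most 'central' updates).
    The paper's protocol runs this kind of selection under 3PC."""
    flat = np.stack([m.ravel() for m in local_models])
    # Pairwise Euclidean distances between flattened model vectors.
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    scores = dists.sum(axis=1)
    keep = np.argsort(scores)[:k]    # indices of the k best-scored models
    return flat[keep].mean(axis=0)   # aggregate only the selected models

# Toy example: 6 honest updates near zero plus 2 poisoned outliers.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(6)]
poisoned = [rng.normal(5.0, 0.1, size=10) for _ in range(2)]
aggregate = select_top_k(honest + poisoned, k=4)
print(aggregate)  # stays close to the honest mean despite the outliers
```

With half the inputs replaced by outliers, the distance-based score ranks the poisoned vectors last, so the aggregate remains near the honest mean; this mirrors the robustness the paper reports, but without any of the privacy guarantees the secure protocols provide.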
Keywords
Byzantine-robust, federated learning, poisoning attacks, privacy-preservation, three-party computation (3PC)