Gradient Leakage Attacks in Federated Learning: Research Frontiers, Taxonomy and Future Directions

Haomiao Yang, Mengyu Ge, Dongyun Xue, Kunlan Xiang, Hongwei Li, Rongxing Lu

IEEE Network (2023)

Abstract
Federated learning (FL) is a distributed deep learning framework that has become increasingly popular in recent years. Essentially, FL allows numerous participants and a parameter server to co-train a deep learning model through shared gradients without revealing the private training data. Recent studies, however, have shown that a potential adversary (either the parameter server or a participant) can recover private training data from the shared gradients; such behavior is called a gradient leakage attack (GLA). In this study, we first present an overview of FL systems and outline the GLA philosophy. We classify the existing GLAs into two paradigms: optimization-based and analytics-based attacks. In particular, the optimization-based approach defines the attack process as an optimization problem, whereas the analytics-based approach defines the attack as a problem of solving multiple linear equations. We present a comprehensive review of the state-of-the-art GLA algorithms followed by a detailed comparison. Based on the observed shortcomings of the existing optimization-based and analytics-based methods, we devise a new generation-based GLA paradigm. We demonstrate the superiority of the proposed GLA in terms of data reconstruction performance and efficiency, thus posing a greater potential threat to federated learning protocols. Finally, we pinpoint a variety of promising future directions for GLA research.
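The analytics-based paradigm described in the abstract, which recovers data by solving linear equations, can be illustrated with a minimal sketch. For a single fully connected layer y = Wx + b, the shared gradients satisfy dL/dW = (dL/dy)x^T and dL/db = dL/dy, so dividing any row of dL/dW by the corresponding entry of dL/db recovers the private input x exactly. The model, loss, and variable names below are illustrative assumptions, not the paper's specific construction:

```python
import numpy as np

# Sketch of an analytics-based gradient leakage attack on a single
# linear layer y = W x + b. Because dL/dW = (dL/dy) x^T and
# dL/db = dL/dy, each row of dL/dW divided by the matching entry
# of dL/db yields the private input x.

rng = np.random.default_rng(0)

# Victim's private input and a random linear model (illustrative sizes).
x = rng.normal(size=4)            # private training sample
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

# Forward pass with a simple squared-error loss against a random target.
y = W @ x + b
target = rng.normal(size=3)
dL_dy = 2 * (y - target)          # gradient of ||y - target||^2 w.r.t. y

# Gradients that would be shared with the parameter server in FL.
grad_W = np.outer(dL_dy, x)       # dL/dW
grad_b = dL_dy                    # dL/db

# Attack: solve the linear equations grad_W[i] = grad_b[i] * x for x.
i = int(np.argmax(np.abs(grad_b)))  # pick a row with a nonzero bias gradient
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))  # the private input is recovered exactly
```

An optimization-based attack would instead start from a random dummy input, compute its gradients, and minimize the distance to the shared gradients by gradient descent; the analytic route above works in closed form but only for layers whose gradient structure admits such equations.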
Keywords
gradient leakage attacks, federated learning