Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity

Xingcai Zhou, Guang Yang

INFORMATION SCIENCES(2024)

Abstract
Federated learning is a common distributed framework for large-scale learning, in which a model is trained over massively distributed remote devices without sharing the data stored on them. It faces at least three key challenges: heterogeneity in federated networks, privacy, and communication costs. In this paper, we propose three federated learning algorithms that address these issues in turn. First, we introduce the FedSfDane algorithm (DANE with a Shrinkage factor for Federated learning), which improves the inexact approximation of the full gradient, captures statistical heterogeneity, and restrains systems heterogeneity across devices. To avoid possible privacy leakage in federated learning, we propose a Privacy-preserving FedSfDane algorithm (PFedSfDane) that is resistant to adversarial attacks. Further, we give a novel Communication-efficient PFedSfDane (CPFedSfDane) algorithm for large-scale federated networks, which effectively handles all three challenges. We provide convergence guarantees for the three algorithms on both convex and non-convex learning problems. Numerical experiments show that our algorithms outperform the FedDANE, FedAvg, and FedProx algorithms, especially on highly heterogeneous federated networks. CPFedSfDane improves the prediction accuracy of the state-of-the-art FedDANE algorithm by about 15.0% on the sent140 dataset, while offering strong privacy protection and communication efficiency.
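To make the DANE-style setup concrete, the following is a minimal sketch of one federated round in which each device approximately solves a DANE local subproblem with a gradient correction scaled by a shrinkage factor. This is an illustrative assumption about the general algorithm family, not the paper's exact FedSfDane method: the quadratic local losses, the `alpha` shrinkage placement, and all hyperparameters here are hypothetical choices for demonstration.

```python
import numpy as np

def local_dane_update(w_t, A, b, global_grad, alpha=0.5, mu=1.0, lr=0.01, steps=100):
    """Approximately solve a DANE-style local subproblem on one device.

    Illustrative local objective (a sketch, not the paper's formulation):
        f_i(w) - <grad f_i(w_t) - alpha * global_grad, w> + (mu/2)||w - w_t||^2
    where f_i(w) = 0.5 * ||A w - b||^2 and `alpha` plays the role of a
    shrinkage factor on the aggregated-gradient correction.
    """
    local_grad_t = A.T @ (A @ w_t - b)              # grad f_i at the current iterate
    correction = local_grad_t - alpha * global_grad  # shrunk gradient correction
    w = w_t.copy()
    for _ in range(steps):                           # inexact inner solve by GD
        grad = A.T @ (A @ w - b) - correction + mu * (w - w_t)
        w -= lr * grad
    return w

def federated_round(w_t, datasets, alpha=0.5):
    """One server round: aggregate the full gradient, then average local solutions."""
    global_grad = sum(A.T @ (A @ w_t - b) for A, b in datasets) / len(datasets)
    local_models = [local_dane_update(w_t, A, b, global_grad, alpha)
                    for A, b in datasets]
    return sum(local_models) / len(local_models)

# Synthetic heterogeneous devices: each holds its own quadratic problem.
rng = np.random.default_rng(0)
datasets = [(rng.standard_normal((20, 5)), rng.standard_normal(20)) for _ in range(4)]

def global_loss(w):
    return sum(0.5 * np.sum((A @ w - b) ** 2) for A, b in datasets)

w = np.zeros(5)
loss_start = global_loss(w)
for _ in range(10):
    w = federated_round(w, datasets)
loss_end = global_loss(w)  # should decrease over rounds on this toy problem
```

The key design point the sketch mirrors is that communication per round is one model vector down and one gradient/model vector up per device, while the heavier subproblem solving stays local; the shrinkage factor damps the global correction when local and global curvature disagree.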
Keywords
Federated learning,System heterogeneity,Data heterogeneity,Privacy protection,Communication costs