Efficient Parameter Aggregation in Federated Learning with Hybrid Convergecast

2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC)

Abstract
In federated learning, workers train local models on their private data sets and upload only local gradients to a remote aggregator, so data privacy is preserved and parallelism is achieved. In large-scale deep learning tasks, however, frequent parameter exchanges between workers and the aggregator, combined with system "churn" (i.e., devices frequently joining and leaving the network), can severely degrade system performance in terms of communication cost, the number of iterations required, per-iteration latency, and the accuracy of the trained model. Existing research leverages different network topologies to improve the performance of federated learning. In this paper, we propose a novel hybrid network topology that integrates a ring (R) and an n-ary tree (T) to provide flexible and adaptive convergecast in federated learning. Specifically, participating peers within one hop of each other are organized into a local ring to adapt to device dynamics (i.e., "churn") and perform local cooperative shuffling; an n-ary convergecast tree is then formed from the local rings to the aggregator to ensure communication efficiency. Theoretical analysis shows the superiority of the proposed hybrid (R+T) convergecast design in terms of system latency compared to existing topologies. Prototype-based simulation on CloudLab shows that the hybrid (R+T) design reduces the number of iteration rounds while achieving the best model accuracy under system "churn" compared to the state of the art.
Keywords
Federated Learning, Parallel Machine Learning, Convergecast, n-ary Tree, Ring, CloudLab
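
To illustrate the two-stage aggregation the abstract describes, below is a minimal sketch, assuming gradients are NumPy vectors; the helper names (ring_shuffle, build_nary_tree) and the specific averaging scheme are illustrative assumptions, not the paper's actual protocol.

import numpy as np

def ring_shuffle(gradients):
    # Local cooperative shuffling within a one-hop ring (R step): peers mix
    # their gradients so a churned device's contribution is already blended
    # into its ring before anything is sent upstream. Here: a simple average.
    return np.mean(gradients, axis=0)

def build_nary_tree(ring_aggregates, n):
    # n-ary convergecast tree (T step): each internal node sums the partial
    # aggregates of its n children until a single aggregate reaches the root.
    level = ring_aggregates
    while len(level) > 1:
        level = [np.sum(level[i:i + n], axis=0) for i in range(0, len(level), n)]
    return level[0]

# Example: 4 rings of 3 workers each, 8-dimensional gradients, binary (n=2) tree.
rng = np.random.default_rng(0)
rings = [[rng.normal(size=8) for _ in range(3)] for _ in range(4)]
ring_aggregates = [ring_shuffle(g) for g in rings]            # intra-ring averaging
global_update = build_nary_tree(ring_aggregates, n=2) / 4.0   # tree convergecast, then mean over rings
print(global_update)

In this sketch the ring stage trades a small amount of local communication for robustness to churn, while the tree stage keeps the worker-to-aggregator path logarithmic in the number of rings, which is the latency argument the abstract makes for the hybrid (R+T) design.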