FedLC: Accelerating Asynchronous Federated Learning in Edge Computing

IEEE Trans. Mob. Comput. (2024)

Abstract
Federated Learning (FL) has been widely adopted to process the enormous data generated in application scenarios like Edge Computing (EC). However, the commonly used synchronous mechanism in FL may incur unacceptable waiting times for heterogeneous devices, placing great strain on the devices' constrained resources. The asynchronous alternative, meanwhile, is known to suffer from model staleness, which degrades the performance of the trained model, especially on non-i.i.d. data. In this paper, we design a novel asynchronous FL mechanism, named FedLC, that handles the non-i.i.d. issue in EC by enabling local collaboration among edge devices. Specifically, apart from uploading its local model directly to the server, each device transmits its gradient to other devices with different data distributions for local collaboration, which improves model generality. We theoretically analyze the convergence rate of FedLC and derive the quantitative relationship between the convergence bound and local collaboration. We design an efficient algorithm that uses a demand-list to determine the set of devices receiving gradients from each device. To handle model staleness, we further assign different learning rates to devices according to their participation frequency. Extensive experimental results demonstrate the effectiveness of the proposed mechanism.
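The abstract describes two mechanisms: gradient sharing among devices with differing data distributions, and staleness-dependent learning rates. Below is a minimal Python sketch of how such an asynchronous update loop might look. The polynomial staleness decay, the `mix` weight, and all function names are illustrative assumptions, not the paper's actual algorithm, which selects gradient recipients via a demand-list and sets learning rates by participation frequency.

```python
import numpy as np

def staleness_lr(base_lr, staleness, alpha=0.5):
    # Dampen the step size for stale updates; polynomial decay is an
    # assumed rule -- the paper assigns rates by participation frequency.
    return base_lr / (1.0 + staleness) ** alpha

def collaborate(own_grad, peer_grads, mix=0.3):
    # Blend the local gradient with gradients received from peers that
    # hold different data distributions (hypothetical convex mixing rule).
    if not peer_grads:
        return own_grad
    return (1.0 - mix) * own_grad + mix * np.mean(peer_grads, axis=0)

class AsyncServer:
    # Asynchronous parameter server: applies each device's update as it
    # arrives, discounting by how many versions the device is behind.
    def __init__(self, dim, base_lr=0.1):
        self.w = np.zeros(dim)   # global model parameters
        self.version = 0         # global model version counter
        self.base_lr = base_lr

    def apply_update(self, grad, client_version):
        staleness = self.version - client_version
        lr = staleness_lr(self.base_lr, staleness)
        self.w -= lr * grad      # SGD-style step on the global model
        self.version += 1
        return self.w.copy(), self.version

# Toy usage: one device mixes in a peer's gradient, then pushes to the server.
server = AsyncServer(dim=4)
g_local = np.ones(4)
g_peer = [np.array([0.5, -0.5, 1.0, 0.0])]
w, v = server.apply_update(collaborate(g_local, g_peer), client_version=0)
```

The mixing step stands in for the paper's local collaboration; in FedLC the set of peers each device sends to is chosen by the demand-list algorithm rather than fixed in advance.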
Keywords
Asynchronous Federated Learning, Edge Computing, Non-i.i.d., Local Collaboration