Accelerating Decentralized Federated Learning in Heterogeneous Edge Computing

IEEE Transactions on Mobile Computing (2023)

Abstract
In edge computing (EC), federated learning (FL) enables massive numbers of devices to collaboratively train AI models without exposing local data. To avoid the potential bottleneck of the parameter server (PS) architecture, we concentrate on decentralized federated learning (DFL), which adopts peer-to-peer (P2P) communication without maintaining a global model. However, due to the intrinsic features of EC, e.g., resource limitation and heterogeneity, network dynamics, and non-IID data, DFL with a fixed P2P topology and/or an identical model compression ratio for all workers converges slowly. In this paper, we propose an efficient algorithm (termed CoCo) to accelerate DFL by integrating the optimization of topology Construction and model Compression. Concretely, we adaptively construct the P2P topology and determine a specific compression ratio for each worker to cope with system dynamics and heterogeneity under bandwidth constraints. To reflect how non-IID data influence the consistency of local models in DFL, we introduce the consensus distance, i.e., the discrepancy between local models, as a quantitative metric to guide the fine-grained operations of the joint optimization. Extensive simulations and testbed experiments show that CoCo achieves a 10× speedup and reduces communication cost by about 50% on average, compared with existing DFL baselines.
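The two quantities the abstract relies on, the consensus distance between local models and a P2P averaging round with per-worker compression ratios, can be illustrated with a minimal sketch. This is not the paper's CoCo algorithm: the function names (`consensus_distance`, `top_k_compress`, `gossip_round`), the choice of top-k sparsification as the compression operator, and the uniform mixing weights are all assumptions made for illustration.

```python
import numpy as np

def consensus_distance(models):
    """Mean squared L2 deviation of each local model from the average model.

    `models` is a list of flattened parameter vectors, one per worker
    (a hypothetical representation; the paper does not prescribe it).
    """
    mean = np.mean(models, axis=0)
    return float(np.mean([np.linalg.norm(m - mean) ** 2 for m in models]))

def top_k_compress(vector, ratio):
    """Keep the largest-magnitude fraction `ratio` of entries and zero the rest.

    Illustrative stand-in for a per-worker model compression operator.
    """
    k = max(1, int(ratio * vector.size))
    out = np.zeros_like(vector)
    idx = np.argpartition(np.abs(vector), -k)[-k:]
    out[idx] = vector[idx]
    return out

def gossip_round(models, adjacency, ratios):
    """One synchronous P2P averaging round over a given topology.

    adjacency[i][j] == 1 means workers i and j exchange compressed models
    this round; ratios[j] is worker j's compression ratio.
    """
    n = len(models)
    new_models = []
    for i in range(n):
        neighbors = [j for j in range(n) if adjacency[i][j] and j != i]
        received = [top_k_compress(models[j], ratios[j]) for j in neighbors]
        # Uniform mixing weights over the closed neighborhood (a simple choice).
        new_models.append(np.mean([models[i]] + received, axis=0))
    return new_models

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    workers = [rng.normal(size=100) for _ in range(4)]
    # A 4-node ring topology and heterogeneous compression ratios.
    ring = [[1 if abs(i - j) in (1, 3) else 0 for j in range(4)] for i in range(4)]
    ratios = [0.5, 0.25, 0.5, 0.25]
    print("consensus distance before:", consensus_distance(workers))
    workers = gossip_round(workers, ring, ratios)
    print("consensus distance after :", consensus_distance(workers))
```

In this toy setting, the consensus distance shrinks after each gossip round; CoCo's idea, per the abstract, is to pick the topology and the per-worker compression ratios so that this contraction stays fast under bandwidth constraints and non-IID data.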
Key words
federated learning, heterogeneous edge computing