
Node Selection Toward Faster Convergence for Federated Learning on Non-IID Data

IEEE Transactions on Network Science and Engineering (2022)

Abstract
Federated Learning (FL) is a distributed learning paradigm that enables a large number of resource-limited nodes to collaboratively train a model without sharing data. Non-independent-and-identically-distributed (non-i.i.d.) data samples cause discrepancies between the global and local objectives, making the FL model slow to converge. In this paper, we propose an Optimal Aggregation algorithm that finds the optimal subset of local updates from participating nodes in each global round, identifying and excluding adverse local updates by checking the relationship between each local gradient and the global gradient. We then propose a Probabilistic Node Selection framework (FedPNS) that dynamically adjusts each node's selection probability based on the output of Optimal Aggregation, so that nodes which propel faster model convergence are preferentially selected. We analyze theoretically the convergence rate improvement of FedPNS over the commonly adopted Federated Averaging (FedAvg) algorithm. Experimental results demonstrate the effectiveness of FedPNS in accelerating the FL convergence rate compared to FedAvg with random node selection.
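A minimal sketch of the two ideas the abstract describes, assuming (as an illustration, not the paper's exact criterion) that an "adverse" local update is one whose inner product with the aggregate gradient is negative, and that FedPNS down-weights excluded nodes by a hypothetical `penalty` factor before renormalizing:

```python
import numpy as np

def optimal_aggregation(local_updates):
    """Sketch of Optimal Aggregation: iteratively drop the local update
    most opposed to the current aggregate gradient, while its inner
    product with the aggregate is negative (assumed exclusion test)."""
    selected = list(range(len(local_updates)))
    global_grad = np.mean([local_updates[i] for i in selected], axis=0)
    while len(selected) > 1:
        dots = [np.dot(local_updates[i], global_grad) for i in selected]
        worst = selected[int(np.argmin(dots))]
        if np.dot(local_updates[worst], global_grad) >= 0:
            break  # no remaining update conflicts with the aggregate
        selected.remove(worst)
        global_grad = np.mean([local_updates[i] for i in selected], axis=0)
    return selected, global_grad

def update_probs(probs, excluded, penalty=0.5):
    """Sketch of the FedPNS probability update: shrink the selection
    probability of nodes whose updates were excluded (penalty factor
    is an illustrative assumption), then renormalize."""
    probs = np.array(probs, dtype=float)
    probs[excluded] *= penalty
    return probs / probs.sum()
```

For example, with three local gradients `[1, 0]`, `[0.9, 0.1]`, and `[-1.5, 0]`, the third conflicts with the aggregate and is excluded, and its node's selection probability is then reduced for the next round.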
Key words
Federated learning, mobile edge computing, fast convergence, node selection