Decentralized Federated Learning with Asynchronous Parameter Sharing for Large-scale IoT Networks
IEEE Internet of Things Journal (2024)
Abstract
Federated learning (FL) enables wireless terminals to collaboratively learn a
shared parameter model while keeping all the training data on the devices
themselves. Parameter sharing can be synchronous or asynchronous: the former
transmits parameters as blocks or frames and waits until all transmissions
finish, whereas the latter returns status messages for pending and failed
parameter-transmission requests without blocking. Whether synchronous or
asynchronous parameter sharing is applied, the learning model must adapt to
the underlying network architecture, since an ill-suited model degrades
learning performance and, even worse, can cause model divergence under
asynchronous transmission in resource-limited large-scale
Internet-of-Things (IoT) networks.
This paper proposes a decentralized learning model and develops an asynchronous
parameter-sharing algorithm for resource-limited distributed IoT networks. The
proposed decentralized learning model approaches a convex function as the
number of nodes increases, and its learning process converges to a global
stationary point with a higher probability than the centralized FL model.
Moreover, by jointly accounting for the convergence bound of federated learning
and the transmission delay of wireless communications, we develop a node
scheduling and bandwidth allocation algorithm that minimizes the transmission
delay. Extensive simulation results corroborate the effectiveness of the
distributed algorithm in terms of fast model convergence and low transmission
delay.
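The abstract only summarizes the algorithms; the paper's actual update rules
and scheduling policy are not reproduced here. As a minimal, hypothetical
sketch of what asynchronous decentralized parameter sharing with delay-aware
peer scheduling can look like, the Python fragment below uses gossip-style
mixing. Every identifier in it (MIX_WEIGHT, pick_neighbor, the uniform delay
matrix, the quadratic local loss) is an assumption for illustration, not the
authors' method.

```python
# Illustrative sketch only -- NOT the paper's algorithm. It shows a generic
# asynchronous, serverless parameter-sharing loop (gossip-style averaging)
# where each waking node mixes parameters from its lowest-delay neighbor.
import numpy as np

rng = np.random.default_rng(0)

NUM_NODES = 8      # assumed network size
DIM = 4            # assumed model dimension
MIX_WEIGHT = 0.5   # assumed mixing coefficient for received parameters
LR = 0.1           # assumed local learning rate

# Each node holds its own parameter vector; there is no central server.
params = [rng.normal(size=DIM) for _ in range(NUM_NODES)]
# Assumed per-link transmission delays (e.g., inversely proportional to
# the bandwidth allocated to each link).
delay = rng.uniform(0.1, 1.0, size=(NUM_NODES, NUM_NODES))

def local_gradient(theta: np.ndarray) -> np.ndarray:
    """Placeholder local gradient: that of the quadratic 0.5 * ||theta||^2."""
    return theta

def pick_neighbor(i: int) -> int:
    """Delay-aware scheduling stand-in: choose the lowest-delay peer."""
    others = [j for j in range(NUM_NODES) if j != i]
    return min(others, key=lambda j: delay[i][j])

for step in range(200):
    i = int(rng.integers(NUM_NODES))   # a node wakes up asynchronously
    j = pick_neighbor(i)               # scheduled peer (lowest delay)
    # Mix the (possibly stale) received parameters, then take a local step;
    # no node ever blocks waiting for the others to finish transmitting.
    mixed = (1 - MIX_WEIGHT) * params[i] + MIX_WEIGHT * params[j]
    params[i] = mixed - LR * local_gradient(mixed)

print("node disagreement:", np.linalg.norm(params[0] - params[1]))
```

Under these toy assumptions the node parameters drift toward consensus without
any synchronization barrier, which is the qualitative behavior the abstract
attributes to asynchronous sharing; the paper's joint scheduling and bandwidth
allocation would replace the naive lowest-delay rule used above.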
Keywords
Asynchronous communications, distributed algorithm, federated learning, large-scale IoT networks, transmission delay