A Communication-Efficient Stochastic Gradient Descent Algorithm for Distributed Nonconvex Optimization
2024 IEEE 18th International Conference on Control & Automation (ICCA)
Abstract
This paper studies distributed nonconvex optimization problems with
stochastic gradients for a multi-agent system, in which each agent aims to
minimize the sum of all agents' cost functions by using local compressed
information exchange. We propose a distributed stochastic gradient descent
(SGD) algorithm, suitable for a general class of compressors. We show that the
proposed algorithm achieves the linear speedup convergence rate
𝒪(1/√(nT)) for smooth nonconvex functions, where T and n
are the number of iterations and agents, respectively. If the global cost
function additionally satisfies the Polyak–Łojasiewicz condition, the
proposed algorithm converges linearly to a neighborhood of the global
optimum, whether or not the stochastic gradient is unbiased.
Numerical experiments are carried out to verify the efficiency of our
algorithm.
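The abstract does not state the update rule itself, so the following is a minimal sketch of one plausible instantiation: a CHOCO-SGD-style scheme in which each agent transmits only a compressed difference between its current model and the copy its neighbors already hold, with top-k sparsification standing in for the general compressor class. The function names (grad_oracle, top_k), the consensus step size gamma, and the update order are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def top_k(v, k):
    # Top-k sparsification: keep the k largest-magnitude entries.
    # One common member of the general compressor class assumed here.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def compressed_dsgd(grad_oracle, W, x0, T, lr, gamma, k):
    # grad_oracle(i, x): stochastic gradient of agent i's local cost at x.
    # W: doubly stochastic mixing matrix encoding the communication graph.
    # Each agent keeps public copies x_hat of the local models and transmits
    # only the compressed correction C(x_i - x_hat_i) per round.
    n, d = W.shape[0], x0.size
    x = np.tile(x0, (n, 1))      # local models, one row per agent
    x_hat = np.zeros((n, d))     # compressed copies known to neighbors
    for _ in range(T):
        for i in range(n):       # local stochastic gradient step
            x[i] -= lr * grad_oracle(i, x[i])
        # Compressed communication: each agent sends only C(x_i - x_hat_i).
        q = np.stack([top_k(x[i] - x_hat[i], k) for i in range(n)])
        x_hat += q               # all agents update the shared copies
        x += gamma * (W @ x_hat - x_hat)   # gossip consensus correction
    return x.mean(axis=0)

# Toy usage (hypothetical setup): 8 agents on a ring, noisy quadratic costs.
n, d = 8, 20
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, d))
W = 0.5 * np.eye(n) + 0.25 * np.roll(np.eye(n), 1, 1) + 0.25 * np.roll(np.eye(n), -1, 1)
oracle = lambda i, x: (x - targets[i]) + 0.1 * rng.normal(size=d)
x_avg = compressed_dsgd(oracle, W, np.zeros(d), T=500, lr=0.05, gamma=0.5, k=4)
```

The intuition behind the stated rates: averaging the stochastic gradients of n agents reduces the gradient-noise variance by a factor of n, which is the standard mechanism behind an 𝒪(1/√(nT)) linear-speedup bound, and under the Polyak–Łojasiewicz condition ½‖∇f(x)‖² ≥ μ(f(x) − f*), gradient dominance yields linear convergence without convexity.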
Key words
Distributed nonconvex optimization, linear speedup, compressed communication, stochastic gradient