
On the Decentralized Stochastic Gradient Descent With Markov Chain Sampling

IEEE Transactions on Signal Processing (2023)

Abstract
The decentralized stochastic gradient method has emerged as a promising approach to solving large-scale machine learning problems. This paper studies decentralized Markov chain gradient descent (DMGD), a variant of the decentralized stochastic gradient method that draws random samples along the trajectory of a Markov chain. DMGD arises when obtaining independent samples is costly or impossible, precluding the use of traditional stochastic gradient algorithms. Specifically, we consider DMGD over a connected graph, where each node communicates only with its neighbors by sending and receiving intermediate results. We establish both ergodic and nonergodic convergence rates of DMGD, which elucidate the critical dependencies on the topology of the graph connecting all nodes and on the mixing time of the Markov chain. We further numerically verify the sample efficiency of DMGD.
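To make the scheme concrete, below is a minimal Python sketch of the kind of update the abstract describes: each node samples along its own Markov chain over local data indices, takes a stochastic gradient step, and averages parameters with its graph neighbors through a doubly stochastic mixing matrix. The least-squares objective, ring-graph mixing matrix W, lazy-random-walk transition, and step-size schedule are all illustrative assumptions, not the paper's exact algorithm or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, dim, n_samples = 4, 5, 20
A = rng.normal(size=(n_nodes, n_samples, dim))  # local data matrices
b = rng.normal(size=(n_nodes, n_samples))       # local targets

# Doubly stochastic mixing matrix for a ring graph: each node averages
# with its two neighbors (an assumed, Metropolis-style weighting).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

def next_state(s):
    # Lazy random walk on {0, ..., n_samples-1}: a stand-in for the
    # Markov chain trajectory that replaces i.i.d. sampling.
    if rng.random() < 0.5:
        return s
    return (s + rng.choice([-1, 1])) % n_samples

x = np.zeros((n_nodes, dim))                 # one parameter vector per node
state = rng.integers(n_samples, size=n_nodes)

for t in range(1, 501):
    step = 1.0 / np.sqrt(t)                  # diminishing step size
    grads = np.zeros_like(x)
    for i in range(n_nodes):
        state[i] = next_state(state[i])      # draw sample along the chain
        a, y = A[i, state[i]], b[i, state[i]]
        grads[i] = (a @ x[i] - y) * a        # gradient of 0.5*(a^T x - y)^2
    x = W @ x - step * grads                 # mix with neighbors, then descend

print("consensus residual:", np.linalg.norm(x - x.mean(axis=0)))
```

In this sketch, the only change relative to ordinary decentralized SGD is the sampling rule: with i.i.d. sampling, `state[i]` would be drawn uniformly at random each iteration, whereas here it evolves along a Markov chain, so consecutive samples at a node are correlated.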
Keywords
Markov chain sampling, gradient descent, decentralization, distributed machine learning, convergence