Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks

ICML (2017)

Cited by 338 | Views 60
Abstract
In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
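As a rough illustration (not taken from the paper), the Python sketch below evaluates the two stated time bounds up to constant factors, for hypothetical values of the condition numbers $\kappa_g, \kappa_l$, network diameter $\Delta$, communication delay $\tau$, eigengap $\gamma$, and target precision $\varepsilon$.

```python
import math

def centralized_time(kappa_g, delta, tau, eps):
    """Time bound for distributed accelerated gradient descent in the
    centralized (master/slave) setting, up to constant factors:
    sqrt(kappa_g) * (1 + delta * tau) * ln(1 / eps)."""
    return math.sqrt(kappa_g) * (1 + delta * tau) * math.log(1 / eps)

def decentralized_time(kappa_l, tau, gamma, eps):
    """Time bound for the gossip-based MSDA method, up to constant factors:
    sqrt(kappa_l) * (1 + tau / sqrt(gamma)) * ln(1 / eps)."""
    return math.sqrt(kappa_l) * (1 + tau / math.sqrt(gamma)) * math.log(1 / eps)

# Hypothetical parameter values for illustration only.
print(centralized_time(kappa_g=100, delta=5, tau=0.1, eps=1e-6))
print(decentralized_time(kappa_l=100, tau=0.1, gamma=0.04, eps=1e-6))
```

The comparison shows how the centralized bound depends on the network diameter $\Delta$, while the decentralized bound instead pays a factor $1/\sqrt{\gamma}$ tied to the connectivity of the gossip matrix.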