Bregman Proximal Method for Efficient Communications under Similarity

arXiv (Cornell University), 2023

Abstract
We propose a novel distributed method for monotone variational inequalities and convex-concave saddle point problems arising in various machine learning applications such as game theory and adversarial training. By exploiting similarity, our algorithm overcomes the communication bottleneck, which is a major issue in distributed optimization. The proposed algorithm enjoys an optimal communication complexity of $\delta/\epsilon$, where $\epsilon$ measures the non-optimality in terms of the gap function and $\delta$ is the similarity parameter. All existing distributed algorithms achieving this bound essentially rely on the Euclidean setup. In contrast, our algorithm is built upon Bregman proximal maps and is compatible with an arbitrary Bregman divergence; it therefore has more flexibility to fit the problem geometry than algorithms restricted to the Euclidean setup, and it bridges the gap between the Euclidean and non-Euclidean settings. Using the restart technique, we extend our algorithm to variational inequalities with a $\mu$-strongly monotone operator, resulting in an optimal communication complexity of $\delta/\mu$ (up to a logarithmic factor). Our theoretical results are confirmed by numerical experiments on a stochastic matrix game.
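The abstract mentions Bregman proximal maps and experiments on a stochastic matrix game. The sketch below is not the paper's distributed algorithm; it is a minimal single-machine illustration of a Bregman proximal step, assuming the entropic (Kullback-Leibler) setup on the probability simplex and an extragradient-style loop for a toy matrix game. The function names (`bregman_prox_simplex`, `matrix_game_operator`), the step size, and the random matrix `A` are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def bregman_prox_simplex(z, g, step):
    """One Bregman proximal step on the probability simplex.

    With the negative-entropy distance-generating function, the step
        argmin_{u in simplex}  step * <g, u> + KL(u, z)
    has the closed-form multiplicative update z * exp(-step * g), renormalized.
    """
    w = z * np.exp(-step * g)
    return w / w.sum()

def matrix_game_operator(A, x, y):
    """Monotone operator of the saddle-point problem min_x max_y x^T A y."""
    return A @ y, -A.T @ x  # partial gradients in x and (negated) in y

# Toy matrix game: both players mix over 3 actions (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x = np.ones(3) / 3
y = np.ones(3) / 3
step = 0.1

for _ in range(200):
    # Extragradient-style update built from two Bregman proximal steps.
    gx, gy = matrix_game_operator(A, x, y)
    x_half = bregman_prox_simplex(x, gx, step)
    y_half = bregman_prox_simplex(y, gy, step)
    gx, gy = matrix_game_operator(A, x_half, y_half)
    x = bregman_prox_simplex(x, gx, step)
    y = bregman_prox_simplex(y, gy, step)

print("approximate equilibrium strategies:", x, y)
```

The entropic divergence is one choice of Bregman geometry that matches the simplex constraints of a matrix game; the squared Euclidean norm (giving a projected-gradient step) would be another instance of the same proximal template.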