Robustly Stable Accelerated Momentum Methods With A Near-Optimal L2 Gain and $H_\infty$ Performance

arXiv (Cornell University), 2023

Abstract
We consider the problem of minimizing a strongly convex smooth function whose gradients are subject to additive worst-case deterministic errors that are square-summable. We study the trade-offs between the convergence rate and robustness to gradient errors when designing the parameters of a first-order algorithm. We focus on a general class of momentum methods (GMM) with constant stepsize and momentum parameters, which recovers gradient descent (GD), Nesterov's accelerated gradient (NAG), the heavy-ball (HB), and the triple momentum methods as special cases. We measure the robustness of an algorithm in terms of the cumulative suboptimality over the iterations divided by the $\ell_2$ norm of the gradient errors, which can be interpreted as the minimal (induced) $\ell_2$ gain of a transformed dynamical system that represents the GMM iterations, where the input is the gradient error sequence and the output is a weighted distance to the optimum. For quadratic objectives, we compute the induced $\ell_2$ gain explicitly by leveraging its connection to the $H_\infty$ norm of the dynamical system corresponding to GMM, and we construct worst-case gradient error sequences via a closed-form formula. We also study the stability of GMM with respect to multiplicative noise in various settings by characterizing the structured real and complex stability radii of the GMM system through their connections to the $H_\infty$ norm. This allows us to compare the GD, HB, and NAG methods in terms of robustness, and to argue that HB is not as robust as NAG despite being the fastest...
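
For reference, GMM-type methods in this literature are frequently written in the following three-parameter form (the paper's exact parameterization may differ); with an additive gradient error $e_k$ the inexact iteration reads

$$
y_k = x_k + \gamma\,(x_k - x_{k-1}), \qquad
x_{k+1} = x_k + \beta\,(x_k - x_{k-1}) - \alpha\bigl(\nabla f(y_k) + e_k\bigr),
$$

with GD recovered for $\beta = \gamma = 0$, HB for $\gamma = 0$, and NAG for $\beta = \gamma$.

The sketch below is a minimal illustration (not the paper's construction) of the $\ell_2$-gain / $H_\infty$ connection for a quadratic objective $f(x) = \tfrac{1}{2}x^\top H x$: it writes the iteration above as a linear system driven by the gradient error, uses the illustrative output $z_k = H^{1/2} x_k$ (so that $\|z_k\|^2 = 2 f(x_k)$; the paper's weighting may differ), and estimates the $H_\infty$ norm by a frequency grid over the unit circle. The HB and NAG parameter choices are the standard ones for strongly convex quadratics, not the tuned parameters proposed in the paper.

```python
import numpy as np

def gmm_state_space(H, alpha, beta, gamma):
    """Linear state-space form of the GMM iteration for a quadratic
    objective f(x) = 0.5 * x^T H x (minimizer at the origin).

    State: xi_k = [x_k; x_{k-1}], input: gradient error e_k,
    output: z_k = H^{1/2} x_k (illustrative weighted distance to optimum).
    """
    n = H.shape[0]
    I = np.eye(n)
    A = np.block([
        [(1 + beta) * I - alpha * (1 + gamma) * H, -beta * I + alpha * gamma * H],
        [I, np.zeros((n, n))],
    ])
    B = np.vstack([-alpha * I, np.zeros((n, n))])
    # Symmetric square root of H via its eigendecomposition.
    w, V = np.linalg.eigh(H)
    H_half = V @ np.diag(np.sqrt(w)) @ V.T
    C = np.hstack([H_half, np.zeros((n, n))])
    return A, B, C

def hinf_norm(A, B, C, n_grid=2000):
    """Grid-based estimate of the H-infinity norm of the discrete-time
    system (A, B, C, 0): the maximum over the unit circle of the largest
    singular value of C (z I - A)^{-1} B."""
    n = A.shape[0]
    best = 0.0
    for theta in np.linspace(0.0, np.pi, n_grid):  # real system: [0, pi] suffices
        z = np.exp(1j * theta)
        T = C @ np.linalg.solve(z * np.eye(n) - A, B)
        best = max(best, np.linalg.svd(T, compute_uv=False)[0])
    return best

# Example: a 2-d quadratic with strong convexity mu and smoothness L.
mu, L = 1.0, 100.0
H = np.diag([mu, L])
kappa = L / mu

# Standard parameter choices for strongly convex quadratics (assumptions).
alpha_hb = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta_hb = ((np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)) ** 2
alpha_nag = 1.0 / L
beta_nag = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)

for name, (a, b, g) in {
    "GD (alpha=1/L)": (1.0 / L, 0.0, 0.0),
    "HB": (alpha_hb, beta_hb, 0.0),        # gamma = 0
    "NAG": (alpha_nag, beta_nag, beta_nag),  # beta = gamma
}.items():
    A, B, C = gmm_state_space(H, a, b, g)
    print(f"{name:>16s}: estimated H_inf gain = {hinf_norm(A, B, C):.3f}")
```

Comparing the printed gains across GD, HB, and NAG gives a quick numerical feel for the robustness comparison discussed in the abstract; the grid estimate can be refined with a finer grid or replaced by a dedicated $H_\infty$-norm routine.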
Keywords
stable accelerated momentum methods, near-optimal