Global Convergence of High-Order Regularization Methods with Sums-of-Squares Taylor Models
arXiv (2024)
Abstract
High-order tensor methods that employ Taylor-based local models (of degree p ≥ 3) within adaptive regularization frameworks have recently been proposed for both convex and nonconvex optimization problems. They have been shown to achieve superior, and even optimal, worst-case global and local convergence rates compared to Newton's method. Finding rigorous and efficient techniques for minimizing the Taylor polynomial sub-problems remains challenging for these algorithms. Ahmadi et al. recently introduced a tensor method based on sum-of-squares (SoS) reformulations, so that each Taylor polynomial sub-problem in their approach can be tractably minimized using semidefinite programming (SDP); however, the global convergence and complexity of their method have not been addressed for general nonconvex problems. This paper introduces an algorithmic framework that combines the SoS Taylor model with adaptive regularization techniques for nonconvex smooth optimization problems. Each iteration minimizes an SoS Taylor model at polynomial cost. For general nonconvex functions, the worst-case evaluation complexity bound is 𝒪(ϵ^{-2}), while for strongly convex functions, an improved evaluation complexity bound of 𝒪(ϵ^{-1/p}) is established. To the best of our knowledge, this is the first global rate analysis for an adaptive regularization algorithm with a tractable high-order sub-problem in nonconvex smooth optimization, opening the way for further improvements.
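As a concrete illustration of why the SoS sub-problem is tractable, the minimal sketch below computes the standard SoS lower bound of a univariate quartic (a stand-in for a regularized Taylor model) as a small SDP in cvxpy: maximize γ such that q(s) − γ is a sum of squares, i.e. q(s) − γ = m(s)ᵀ Q m(s) with Q ⪰ 0 and monomial basis m(s) = [1, s, s²]. The quartic coefficients and the use of cvxpy are illustrative assumptions; this is the generic SoS relaxation, not the paper's exact reformulation.

    import cvxpy as cp
    import numpy as np

    # Hypothetical quartic model q(s) = c0 + c1 s + c2 s^2 + c3 s^3 + c4 s^4
    c = np.array([5.0, 1.0, -3.0, 0.0, 1.0])

    # Gram matrix for the monomial basis m(s) = [1, s, s^2]
    Q = cp.Variable((3, 3), PSD=True)
    gamma = cp.Variable()

    # Match coefficients of q(s) - gamma with those of m(s)^T Q m(s)
    constraints = [
        Q[0, 0] == c[0] - gamma,        # constant term
        2 * Q[0, 1] == c[1],            # s
        2 * Q[0, 2] + Q[1, 1] == c[2],  # s^2
        2 * Q[1, 2] == c[3],            # s^3
        Q[2, 2] == c[4],                # s^4
    ]

    cp.Problem(cp.Maximize(gamma), constraints).solve()
    print("SoS lower bound on q:", gamma.value)

Every nonnegative univariate polynomial is a sum of squares, so in this toy case the bound equals the true global minimum of q; the multivariate setting is where the SoS reformulations cited in the abstract become essential.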
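To make the adaptive regularization framework concrete, here is a one-dimensional sketch of an AR3-style outer loop (p = 3) with the common σ/(p+1)·s^{p+1} regularization term. All names and constants (eta, gamma_inc, gamma_dec, the tolerances) are illustrative assumptions, and for brevity the quartic sub-problem is minimized here by rooting its derivative with np.roots rather than by the SoS/SDP reformulation the paper uses.

    import numpy as np

    def ar3_minimize(derivs, x0, sigma0=1.0, eps=1e-6, max_iter=100,
                     eta=0.1, gamma_inc=2.0, gamma_dec=0.5):
        # derivs(x) is assumed to return (f(x), f'(x), f''(x), f'''(x)).
        x, sigma = x0, sigma0
        for _ in range(max_iter):
            fx, g, H, T = derivs(x)
            if abs(g) <= eps:  # approximate first-order stationarity reached
                return x
            # Regularized Taylor model:
            # m(s) = fx + g s + H s^2/2 + T s^3/6 + sigma s^4/4
            m = lambda s: fx + g*s + H*s**2/2 + T*s**3/6 + sigma*s**4/4
            # The model's global minimizer is among the real roots of m'(s)
            roots = np.roots([sigma, T / 2.0, H, g])
            real = roots[np.abs(roots.imag) < 1e-10].real
            s = min(real, key=m)
            f_new = derivs(x + s)[0]
            # Ratio of actual to model-predicted decrease
            rho = (fx - f_new) / max(fx - m(s), 1e-16)
            if rho >= eta:   # successful step: accept, relax regularization
                x, sigma = x + s, max(gamma_dec * sigma, 1e-8)
            else:            # unsuccessful step: reject, tighten regularization
                sigma *= gamma_inc
        return x

    # Example: f(x) = x^4 - 3 x^2 + x with hand-coded exact derivatives
    f = lambda x: x**4 - 3*x**2 + x
    derivs = lambda x: (f(x), 4*x**3 - 6*x + 1, 12*x**2 - 6, 24*x)
    print(ar3_minimize(derivs, x0=2.0))

The accept/reject test on rho and the up/down updates of sigma are what drive the worst-case evaluation complexity bounds quoted above; swapping the root-finding line for the SoS/SDP solve from the previous sketch recovers the spirit of the paper's tractable sub-problem.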