Adaptive Accelerated Composite Minimization

arXiv (2024)

Abstract
The choice of the stepsize in first-order convex optimization is typically based on the smoothness constant and plays a crucial role in the performance of algorithms. Recently, there has been a resurgence of interest in adaptive stepsizes that do not explicitly depend on the smoothness constant. In this paper, we propose a novel adaptive stepsize rule based on function evaluations (i.e., zero-order information) that enjoys provable convergence guarantees for both accelerated and non-accelerated gradient descent. We further discuss the similarities and differences between the proposed stepsize regimes and existing stepsize rules (including Polyak and Armijo). Numerically, we benchmark our proposed algorithms against the state of the art on three classes of problems: smooth minimization (logistic regression, quadratic programming, log-sum-exponential, and approximate semidefinite programming), composite minimization (ℓ_1 constrained and regularized problems), and non-convex minimization (a cubic problem).
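To make the idea of a function-evaluation-based adaptive stepsize concrete, the sketch below shows a generic gradient-descent loop whose stepsize is adjusted using only function values (a zero-order, Armijo-style sufficient-decrease test). This is an illustrative assumption, not the stepsize rule proposed in the paper; the function names, shrink/grow factors, and test problem are hypothetical.

```python
import numpy as np

def adaptive_gradient_descent(f, grad, x0, alpha0=1.0, shrink=0.5, grow=1.1,
                              max_iter=500, tol=1e-8):
    """Gradient descent with a stepsize adapted from function evaluations.

    NOTE: illustrative sketch only, not the adaptive rule of the paper.
    The step is shrunk when a trial point fails a sufficient-decrease test
    (checked with function values, i.e., zero-order information) and is
    cautiously enlarged after accepted steps.
    """
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    fx = f(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        x_trial = x - alpha * g
        f_trial = f(x_trial)
        # Accept the step only if the function value decreases sufficiently.
        if f_trial <= fx - 0.5 * alpha * np.dot(g, g):
            x, fx = x_trial, f_trial
            alpha *= grow      # step was safe: try a slightly larger one next
        else:
            alpha *= shrink    # step too long: shrink and retry next iteration
    return x, fx

# Hypothetical usage: minimize the quadratic f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
x_star, f_star = adaptive_gradient_descent(f, grad, np.zeros(2))
print(x_star, f_star)
```

The key design point this sketch shares with the abstract's description is that the stepsize is tuned from function evaluations alone, rather than from a known smoothness constant; the accelerated variants and convergence guarantees discussed in the paper are not reproduced here.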