A Proximal-Gradient Method for Constrained Optimization
arXiv (2024)
Abstract
We present a new algorithm for solving optimization problems with objective
functions that are the sum of a smooth function and a (potentially) nonsmooth
regularization function, and nonlinear equality constraints. The algorithm may
be viewed as an extension of the well-known proximal-gradient method that is
applicable when constraints are not present. To account for nonlinear equality
constraints, we combine a decomposition procedure for computing trial steps
with an exact merit function for determining trial step acceptance. Under
common assumptions, we show that both the proximal parameter and merit function
parameter eventually remain fixed, and then prove a worst-case complexity
result for the maximum number of iterations before an iterate satisfying
approximate first-order optimality conditions for a given tolerance is
computed. Our preliminary numerical results indicate that our approach has
great promise, especially in terms of returning approximate solutions that are
structured (e.g., sparse solutions when a one-norm regularizer is used).
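The abstract builds on the well-known proximal-gradient method for the unconstrained composite problem. As background, here is a minimal sketch of that classical method (not the paper's constrained algorithm) applied to a least-squares loss with a one-norm regularizer, where the proximal step reduces to componentwise soft-thresholding; the problem data and step size below are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=2000):
    """Classical (unconstrained) proximal-gradient iteration for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # prox of the one-norm
    return x

# Small demo: the one-norm regularizer yields a structured (sparse) solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L with L = ||A||_2^2
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```

The sparsity of `x_hat` illustrates the "structured approximate solutions" the abstract refers to; the paper's contribution is extending this kind of iteration to handle nonlinear equality constraints via a step decomposition and an exact merit function.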