Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method

SIAM J. Optim. (2023)

Abstract
We propose a double-loop inexact accelerated proximal gradient (APG) method for a strongly convex composite optimization problem whose two smooth components have different smoothness constants and computational costs. Compared to APG, the inexact APG can reduce the time complexity for finding a near-stationary point when one smooth component has a higher computational cost but a smaller smoothness constant than the other. Strongly convex composite optimization problems with this property arise as subproblems of a regularized augmented Lagrangian method for affine-constrained composite convex optimization, and also from the smooth approximation of bilinear saddle-point structured nonsmooth convex optimization. We show that the inexact APG method can be applied to both problems and reduces the time complexity for finding a near-stationary solution. Numerical experiments demonstrate significantly higher efficiency of our methods over an optimal primal-dual first-order method by Hamedani and Aybat [SIAM J. Optim., 31 (2021), pp. 1299--1329] and the gradient sliding method by Lan, Ouyang, and Zhou [arXiv:2101.00143, 2021].
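To make the double-loop idea concrete, below is a minimal sketch in Python. It is an assumption-laden illustration, not the paper's algorithm: the random instance, the split of the objective into an "expensive, small-smoothness" component f1 and a "cheap, large-smoothness" component f2, the ℓ1 regularizer g, the step sizes, and the fixed inner iteration count are all hypothetical choices. The outer loop runs accelerated (FISTA-style) steps that call the gradient of f1 once per iteration; the inner loop approximately solves the proximal subproblem involving only the cheap component f2 and g.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Hypothetical instance: f1(x) = 0.5 ||A1 x - b1||^2 ("expensive" gradient,
# small smoothness constant), f2(x) = 0.5 ||A2 x - b2||^2 / m2 ("cheap"
# gradient, larger smoothness constant), g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
n, m2 = 50, 200
A1 = 0.1 * rng.standard_normal((20, n))
b1 = rng.standard_normal(20)
A2 = rng.standard_normal((m2, n))
b2 = rng.standard_normal(m2)
lam = 0.1

grad_f1 = lambda x: A1.T @ (A1 @ x - b1)
grad_f2 = lambda x: A2.T @ (A2 @ x - b2) / m2
L1 = np.linalg.norm(A1.T @ A1, 2)        # smoothness constant of f1
L2 = np.linalg.norm(A2.T @ A2, 2) / m2   # smoothness constant of f2

def inner_prox(y, inner_iters=20):
    """Approximately solve the outer-loop subproblem
        min_x  f2(x) + g(x) + (L1/2) ||x - (y - grad_f1(y)/L1)||^2
    by a few proximal-gradient steps (touching only the cheap f2)."""
    c = y - grad_f1(y) / L1      # the single expensive gradient call
    x = y.copy()
    L_in = L2 + L1               # smoothness of f2 plus the quadratic term
    for _ in range(inner_iters):
        smooth_grad = grad_f2(x) + L1 * (x - c)
        x = soft_threshold(x - smooth_grad / L_in, lam / L_in)
    return x

def inexact_apg(x0, outer_iters=100):
    """Outer accelerated loop with inexact proximal steps."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(outer_iters):
        x_new = inner_prox(y)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

def objective(x):
    return (0.5 * np.linalg.norm(A1 @ x - b1) ** 2
            + 0.5 * np.linalg.norm(A2 @ x - b2) ** 2 / m2
            + lam * np.linalg.norm(x, 1))

x_hat = inexact_apg(np.zeros(n))
```

The point of the structure is the cost asymmetry: each outer iteration pays for one gradient of f1, while the many inner iterations touch only f2, so the total time depends on the smaller smoothness constant L1 for the expensive component. The error tolerance of the inner solve (here crudely fixed at 20 iterations) is what the paper's analysis controls to retain the accelerated rate.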
Key words
optimization problems, gradient, complexity