Adaptive Variant of the Frank-Wolfe Algorithm for Convex Optimization Problems.

G. V. Aivazian, Fedor S. Stonyakin, D. A. Pasechnyk, Mohammad S. Alkousa, A. M. Raigorodsky, I. V. Baran

Program. Comput. Softw. (2023)

Abstract
In this paper, we investigate a variant of the Frank-Wolfe method for convex optimization problems with adaptive selection of the step parameter based on information about the smoothness of the objective function (the Lipschitz constant of the gradient). Theoretical estimates of the quality of the approximate solution produced by the method with adaptively selected parameters L_k are presented. For the class of problems on a convex feasible set with a convex objective function, the guaranteed convergence rate of the proposed method is sublinear. A special subclass of these problems (objective functions satisfying the gradient dominance condition) is considered, and the convergence rate of the method with adaptively selected parameters L_k is estimated. An important feature of the result is the treatment of the case in which, after an iteration is completed, a reduction of the residual of the objective function by at least a factor of two can be guaranteed. At the same time, the use of adaptively selected parameters in the theoretical estimates makes the method applicable to both smooth and non-smooth problems, provided that the iteration termination criterion is met. For smooth problems, the theoretical estimates of the method can be shown to be optimal up to multiplication by a constant factor. Computational experiments are carried out, and a comparison with two other algorithms demonstrates the efficiency of the algorithm on a number of both smooth and non-smooth problems.