Convergence Analysis of Difference-of-Convex Algorithm with Subanalytic Data

Journal of Optimization Theory and Applications (2018)

Abstract
Difference-of-Convex programming and the related Difference-of-Convex Algorithm (DCA), which constitute the backbone of nonconvex programming and global optimization, were introduced by Pham Dinh Tao in 1985 and have been extensively developed by Le Thi Hoai An and Pham Dinh Tao since 1994; they are now classic and increasingly popular. The DCA is a descent method without linesearch, and every limit point of the sequence it generates is a critical point of the underlying Difference-of-Convex program. Determining its convergence rate is a challenging problem whose resolution matters from both theoretical and practical points of view. In this work, we address this problem for the class of Difference-of-Convex programs with subanalytic data by using the nonsmooth form of the Lojasiewicz inequality. We prove that the whole generated sequence converges, provided it is bounded, the objective function is subanalytic and continuous on its domain, and one of the two Difference-of-Convex components is differentiable with locally Lipschitz derivative. We also establish a convergence-rate result that depends on the Lojasiewicz exponent of the objective function. Finally, for both trust-region subproblems and nonconvex quadratic programs, we show that the Lojasiewicz exponent is one half, so that the DCA applied to these Difference-of-Convex programs is R-linearly (root-linearly) convergent.
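
The DCA scheme referred to above alternates a (sub)gradient step on the second DC component with a convex subproblem on the first. As a rough illustration only (not taken from the paper), the following Python sketch applies the classical DC decomposition of a trust-region subproblem, one of the two problem classes for which the abstract reports R-linear convergence; the decomposition parameter rho, the stopping tolerance, and all function names below are assumptions of this sketch, not the authors' implementation.

# Illustrative sketch: DCA for the trust-region subproblem
#   min { 0.5*x'Ax + b'x : ||x|| <= r }
# using the standard DC decomposition f = g - h with
#   g(x) = (rho/2)*||x||^2 + b'x + indicator of the ball,
#   h(x) = 0.5*x'(rho*I - A)*x,  where rho >= lambda_max(A).
# Each iteration linearizes h at the current point and minimizes the
# resulting convex model, which here reduces to a projection onto the ball.
import numpy as np

def dca_trust_region(A, b, r, rho=None, x0=None, max_iter=500, tol=1e-10):
    n = b.size
    if rho is None:
        rho = np.linalg.eigvalsh(A).max() + 1e-8   # make rho*I - A positive semidefinite
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(max_iter):
        y = (rho * np.eye(n) - A) @ x              # y_k = grad h(x_k)
        z = (y - b) / rho                          # unconstrained minimizer of the convex model
        nz = np.linalg.norm(z)
        x_new = z if nz <= r else r * z / nz       # project onto the ball of radius r
        if np.linalg.norm(x_new - x) <= tol:       # stop when successive iterates stall
            return x_new
        x = x_new
    return x

# Example usage on a small nonconvex instance (A indefinite):
A = np.array([[1.0, 0.0], [0.0, -2.0]])
b = np.array([1.0, 1.0])
x_star = dca_trust_region(A, b, r=1.0)

Under the paper's result, each DCA step for this problem class is a cheap closed-form update, and the bounded iterate sequence converges R-linearly because the Lojasiewicz exponent equals one half.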
Keywords
Difference-of-Convex programming, Difference-of-Convex algorithm, Subdifferential, Subanalyticity, Lojasiewicz exponent, Convergence rate, 90C30, 90C26, 47N10