A Quasi-Newton Subspace Trust Region Algorithm for Nonmonotone Variational Inequalities in Adversarial Learning over Box Constraints

Zicheng Qiu, Jie Jiang, Xiaojun Chen

arXiv (2023)

Abstract
The first-order optimality condition of convexly constrained nonconvex-nonconcave min-max optimization problems with box constraints can be formulated as a nonmonotone variational inequality (VI), which is equivalent to a system of nonsmooth equations. In this paper, we propose a quasi-Newton subspace trust region (QNSTR) algorithm for the least squares problem defined by a smoothing approximation of the nonsmooth equations. Based on the structure of the nonmonotone VI, we use an adaptive quasi-Newton formula to approximate the Hessian matrix and, at each step of the QNSTR algorithm, efficiently solve a low-dimensional strongly convex quadratic program with ellipse constraints in a subspace. We prove the global convergence of the QNSTR algorithm to an ϵ-first-order stationary point of the min-max optimization problem. Moreover, we present numerical results of the QNSTR algorithm with different subspaces for a mixed generative adversarial network (GAN) in eye image segmentation using real data, demonstrating the efficiency and effectiveness of the QNSTR algorithm for solving large-scale min-max optimization problems.
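To make the iteration described in the abstract concrete, the following is a minimal sketch of a subspace trust-region step for the least squares problem min_x 0.5*||F(x)||^2, where F is assumed to be a smoothed residual map. The subspace construction (gradient plus previous step), the Gauss-Newton Hessian surrogate, and the dogleg-style subproblem solver are illustrative assumptions, not the authors' exact QNSTR method or quasi-Newton formula.

```python
import numpy as np

def subspace_tr_sketch(F, J, x0, max_iter=100, delta=1.0, tol=1e-8):
    """Illustrative subspace trust-region loop for min 0.5*||F(x)||^2.
    F: residual map R^n -> R^m, J: its Jacobian (m x n), x0: starting point."""
    x = np.asarray(x0, dtype=float)
    prev_step = None
    for _ in range(max_iter):
        r = F(x)                       # residual
        Jx = J(x)                      # Jacobian of the residual
        g = Jx.T @ r                   # gradient of 0.5*||F(x)||^2
        if np.linalg.norm(g) < tol:
            break
        B = Jx.T @ Jx                  # Hessian surrogate (quasi-Newton in the paper)
        # Low-dimensional subspace: gradient direction plus the previous step.
        cols = [g]
        if prev_step is not None and np.linalg.norm(prev_step) > 0:
            cols.append(prev_step)
        V, _ = np.linalg.qr(np.column_stack(cols))     # orthonormal basis, n x k
        # Reduced strongly convex quadratic model in the subspace.
        gk = V.T @ g
        Bk = V.T @ B @ V + 1e-8 * np.eye(V.shape[1])   # keep it positive definite
        # Trust-region subproblem in k dimensions with a ball constraint ||s|| <= delta,
        # handled here by a simple Newton-or-Cauchy fallback for illustration.
        s_newton = -np.linalg.solve(Bk, gk)
        if np.linalg.norm(s_newton) <= delta:
            s = s_newton
        else:
            s = -delta * gk / np.linalg.norm(gk)       # Cauchy-type step on the boundary
        step = V @ s
        # Actual versus predicted reduction of the least squares objective.
        f_old = 0.5 * (r @ r)
        r_new = F(x + step)
        f_new = 0.5 * (r_new @ r_new)
        pred = -(gk @ s + 0.5 * (s @ Bk @ s))
        rho = (f_old - f_new) / max(pred, 1e-16)
        if rho > 0.1:                  # accept the step
            x = x + step
            prev_step = step
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x

# Toy usage on a hypothetical nonsmooth system smoothed with parameter mu:
mu = 1e-3
F = lambda x: np.array([np.sqrt(x[0]**2 + mu**2) - 1.0, x[0] + x[1] - 2.0])
J = lambda x: np.array([[x[0] / np.sqrt(x[0]**2 + mu**2), 0.0], [1.0, 1.0]])
print(subspace_tr_sketch(F, J, np.array([3.0, 0.0])))
```

Because the quadratic model is restricted to a k-dimensional subspace, the trust-region subproblem stays small and cheap even when the underlying min-max problem (e.g., a GAN) has a very large number of variables, which is the computational point emphasized in the abstract.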