Convergence analysis of a norm minimization-based convex vector optimization algorithm

arXiv (Cornell University), 2023

Abstract
In this work, we propose an outer approximation algorithm for solving bounded convex vector optimization problems (CVOPs). The scalarization model solved iteratively within the algorithm is a modification of the norm-minimizing scalarization proposed in Ararat et al. (2022). For a predetermined tolerance $\epsilon>0$, we prove that the algorithm terminates after finitely many iterations and returns a polyhedral outer approximation of the upper image of the CVOP such that the Hausdorff distance between the two is less than $\epsilon$. We show that for an arbitrary norm used in the scalarization models, the approximation error after $k$ iterations decreases on the order of $\mathcal{O}(k^{{1}/{(1-q)}})$, where $q$ is the dimension of the objective space. An improved convergence rate of $\mathcal{O}(k^{{2}/{(1-q)}})$ is proved for the special case of the Euclidean norm.
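As a purely illustrative sketch (not part of the paper), the two rate bounds stated above can be compared numerically. The proportionality constants are hypothetical (taken as 1); only the exponents $1/(1-q)$ and $2/(1-q)$ come from the abstract, and both are negative for $q \ge 2$, so each bound decreases in $k$.

```python
# Compare the two convergence-rate bounds as functions of the
# iteration count k and the objective-space dimension q.
# Constants are assumed to be 1 for illustration only.

def general_rate(k: int, q: int) -> float:
    """Bound proportional to k^(1/(1-q)) for an arbitrary norm."""
    return k ** (1.0 / (1.0 - q))

def euclidean_rate(k: int, q: int) -> float:
    """Improved bound proportional to k^(2/(1-q)) for the Euclidean norm."""
    return k ** (2.0 / (1.0 - q))

if __name__ == "__main__":
    for q in (2, 3):
        for k in (10, 100, 1000):
            print(f"q={q}, k={k}: general={general_rate(k, q):.6f}, "
                  f"euclidean={euclidean_rate(k, q):.6f}")
```

For $q=2$ the general bound scales as $k^{-1}$ while the Euclidean bound scales as $k^{-2}$, so the Euclidean-norm rate shrinks the error bound quadratically faster in this illustration.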
Key words
convex vector optimization