First-order algorithms for robust optimization problems via convex-concave saddle-point Lagrangian reformulation

arXiv (2021)

Abstract
Robust optimization (RO) is one of the key paradigms for solving optimization problems affected by uncertainty. The two principal approaches to RO, the robust counterpart method and the adversarial approach, potentially lead to excessively large optimization problems. For that reason, first-order approaches based on online convex optimization have been proposed (Ben-Tal et al. (2015), Kilinc-Karzan and Ho-Nguyen (2018)) as alternatives for large-scale problems. However, these methods are either stochastic in nature or involve a binary search for the optimal value. We propose deterministic first-order algorithms based on a saddle-point Lagrangian reformulation that avoid both of these issues. Our approach recovers the O(1/epsilon^2) convergence rate of the earlier approaches in the general case, and offers an improved O(1/epsilon) rate for problems whose constraints are affine in both the decision variables and the uncertainty. An experiment on robust quadratic optimization demonstrates the numerical benefits of our approach.
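To make the saddle-point idea concrete, below is a minimal sketch (not the paper's algorithm) of the extragradient method applied to a toy bilinear saddle-point problem, min over x in the probability simplex of max over u in the unit ball of x^T A u. The matrix A, step size eta, and iteration count are illustrative assumptions; for bilinear (affine-in-both-variables) problems of this kind, extragradient-type schemes are known to attain the O(1/epsilon) rate the abstract refers to.

```python
import numpy as np

def proj_simplex(v):
    # Euclidean projection onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def proj_ball(u, radius=1.0):
    # Euclidean projection onto the ball of the given radius.
    n = np.linalg.norm(u)
    return u if n <= radius else u * (radius / n)

def extragradient(A, steps=2000, eta=0.1):
    # Solves min_{x in simplex} max_{u in unit ball} x^T A u.
    # grad_x L = A u (descend), grad_u L = A^T x (ascend).
    m, n = A.shape
    x, u = np.ones(m) / m, np.zeros(n)
    x_avg, u_avg = np.zeros(m), np.zeros(n)
    for _ in range(steps):
        # Prediction (half) step.
        x_h = proj_simplex(x - eta * (A @ u))
        u_h = proj_ball(u + eta * (A.T @ x))
        # Correction step, using gradients at the half point.
        x = proj_simplex(x - eta * (A @ u_h))
        u = proj_ball(u + eta * (A.T @ x_h))
        x_avg += x
        u_avg += u
    # Ergodic (averaged) iterates carry the O(1/T) guarantee.
    return x_avg / steps, u_avg / steps

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
x, u = extragradient(A)
print("approximate saddle value:", x @ A @ u)
```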
Keywords
robust optimization problems, algorithms, first-order, convex-concave, saddle-point