Reducing Conservatism In Robust Optimization

INFORMS JOURNAL ON COMPUTING(2020)

Cited 34 | Views 7
Abstract
Although robust optimization is a powerful technique for dealing with uncertainty in optimization, its solutions can be too conservative. More specifically, it can lead to an objective value much worse than that of the nominal solution, or even to infeasibility of the robust problem. In practice, this can cause robust solutions to be disregarded in favor of the nominal solution. This conservatism is caused both by the constraint-wise approach of robust optimization and by its core assumption that all constraints are hard for all scenarios in the uncertainty set. This paper seeks to alleviate this conservatism by proposing an alternative robust formulation that condenses all uncertainty into a single constraint, bounding the worst-case expected violation of the original constraints from above. Using recent results in distributionally robust optimization, the proposed formulation is shown to be tractable for both right- and left-hand side uncertainty. A computational study is performed with problems from the NETLIB library. For some problems, the percentage of uncertainty is magnified fourfold in terms of the increase in objective value of the standard robust solution compared with the nominal solution, whereas we find solutions that safeguard against over half the violation at only a tenth of the cost in objective value. For problems with an infeasible standard robust counterpart, the suggested approach is still applicable and finds solutions that safeguard against most of the uncertainty at a low price in terms of objective value.
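The contrast between the constraint-wise robust counterpart and the aggregated expected-violation constraint can be illustrated on a toy LP. The sketch below is not the paper's formulation or data; it uses a hypothetical two-variable problem with a finite, equiprobable scenario set (the paper works with general uncertainty sets and ambiguity sets from distributionally robust optimization). The soft variant bounds the expected constraint violation by a budget `delta` via standard epigraph variables:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: maximize x1 + x2 subject to an uncertain constraint a^T x <= 10,
# with three equiprobable scenarios for a (hypothetical data for illustration).
A = np.array([[1.0, 1.0],    # nominal coefficients
              [1.5, 1.0],
              [1.0, 1.5]])
b, p = 10.0, np.full(3, 1 / 3)
c = np.array([-1.0, -1.0])   # linprog minimizes, so negate the objective

# Standard (constraint-wise) robust counterpart: a_s^T x <= b for every scenario s.
robust = linprog(c, A_ub=A, b_ub=np.full(3, b), bounds=[(0, None)] * 2)

# Aggregated alternative: a single constraint bounding the expected violation
# E[(a^T x - b)^+] by delta, linearized with epigraph variables v_s >= 0.
delta = 1.0
c_soft = np.concatenate([c, np.zeros(3)])
A_soft = np.block([[A, -np.eye(3)],                       # a_s^T x - v_s <= b
                   [np.zeros((1, 2)), p.reshape(1, 3)]])  # sum_s p_s v_s <= delta
b_soft = np.append(np.full(3, b), delta)
soft = linprog(c_soft, A_ub=A_soft, b_ub=b_soft, bounds=[(0, None)] * 5)

print(-robust.fun, -soft.fun)  # robust objective 8.0, soft objective 9.2
```

In this instance the nominal optimum is 10, the constraint-wise robust solution drops to 8, and allowing an expected violation of 1 recovers an objective of 9.2, mirroring the paper's observation that guarding against most of the violation can cost only a fraction of the standard robust price.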
Keywords
robust optimization, non-constraint-wise uncertainty, ambiguity