Information Relaxation Bounds for Infinite Horizon Markov Decision Processes

Operations Research (2017)

Abstract
We consider the information relaxation approach for calculating performance bounds for stochastic dynamic programs (DPs), following Brown et al. [Brown DB, Smith JE, Sun P (2010) Information relaxations and duality in stochastic dynamic programs. Oper. Res. 58(4, Part 1):785–801]. This approach generates performance bounds by solving problems with relaxed nonanticipativity constraints and a penalty that punishes violations of these constraints. In this paper, we study infinite horizon DPs with discounted costs and consider applying information relaxations to reformulations of the DP. These reformulations use different state transition functions and correct for the change in state transition probabilities by multiplying by likelihood ratio factors. These reformulations can greatly simplify solutions of the information relaxations, both by leading to finite horizon subproblems and by reducing the number of states that need to be considered in these subproblems. We show that any reformulation leads to a lower bound on the optimal cost of the DP when used with an information relaxation and a penalty built from a broad class of approximate value functions. We refer to this class of approximate value functions as subsolutions, and it includes approximate value functions based on Lagrangian relaxations as well as those based on approximate linear programs. We show that the information relaxation approach, in theory, recovers a tight lower bound using any reformulation and is guaranteed to improve on the lower bounds from subsolutions. Finally, we apply information relaxations to an inventory control application with an autoregressive demand process, as well as to dynamic service allocation in a multiclass queue. In our examples, we find that the information relaxation lower bounds are easy to calculate and are very close to the expected cost of simple heuristic policies, thereby showing that these heuristic policies are nearly optimal.
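The core computation described in the abstract, a lower bound obtained by relaxing the nonanticipativity constraints and adding a zero-mean penalty, can be sketched on a toy problem. The sketch below is illustrative and not the paper's method: it assumes a hypothetical lost-sales inventory model with i.i.d. Poisson demand, truncates the discounted infinite-horizon objective at a horizon T, and uses a zero approximate value function, so it shows only the zero-penalty (perfect information) bound rather than the paper's reformulations or subsolution-based penalties.

```python
import math

import numpy as np

# Minimal sketch of an information relaxation lower bound in the spirit of
# Brown, Smith, and Sun (2010). All modeling choices below are assumptions
# made for illustration only.

rng = np.random.default_rng(0)

gamma = 0.9              # discount factor
T = 40                   # truncation horizon (assumption; the paper treats the infinite horizon)
max_inv, max_order = 20, 10
h, p, c = 1.0, 4.0, 2.0  # holding, lost-sales, and ordering cost per unit
demand_mean = 3.0

def stage_cost(x, a, d):
    """One-period cost with lost sales: ordering + holding + shortage."""
    y = min(x + a, max_inv)
    return c * a + h * max(y - d, 0) + p * max(d - y, 0)

def next_state(x, a, d):
    """Post-demand inventory level (lost sales, so never negative)."""
    return max(min(x + a, max_inv) - d, 0)

def v_hat(x):
    """Approximate value function used to build the penalty. Zero here, which
    recovers the zero-penalty (perfect information) bound; a genuine
    subsolution could be plugged in instead."""
    return 0.0

# Discretized demand distribution, used for the penalty's conditional expectation.
d_support = np.arange(0, 16)
d_probs = np.array([math.exp(-demand_mean) * demand_mean**k / math.factorial(k)
                    for k in d_support])
d_probs /= d_probs.sum()

# E[v_hat(next state) | x, a], precomputed once since demand is i.i.d.
E_vhat = np.array([[sum(pr * v_hat(next_state(x, a, d)) for d, pr in zip(d_support, d_probs))
                    for a in range(max_order + 1)] for x in range(max_inv + 1)])

def inner_problem(demands, x0=0):
    """Deterministic DP along one revealed demand path: minimize discounted
    cost plus penalty with full knowledge of future demands (the relaxation)."""
    V = np.zeros(max_inv + 1)                     # terminal value 0 at the truncation
    for t in range(T - 1, -1, -1):
        d = int(demands[t])
        V_new = np.empty(max_inv + 1)
        for x in range(max_inv + 1):
            best = np.inf
            for a in range(max_order + 1):
                xn = next_state(x, a, d)
                # Penalty has zero conditional expectation under any
                # nonanticipative policy, so the bound remains valid.
                penalty = gamma * (v_hat(xn) - E_vhat[x, a])
                best = min(best, stage_cost(x, a, d) + penalty + gamma * V[xn])
            V_new[x] = best
        V = V_new
    return V[x0]

# Monte Carlo estimate of the lower bound: average the path-wise inner problems
# over independently sampled demand paths.
n_paths = 200
vals = [inner_problem(rng.poisson(demand_mean, size=T)) for _ in range(n_paths)]
print(f"information relaxation lower bound ~ {np.mean(vals):.2f} "
      f"(std err {np.std(vals) / np.sqrt(n_paths):.2f})")
```

Averaging the path-wise minima is valid because the penalty term has zero conditional expectation under any nonanticipative policy; replacing `v_hat` with a subsolution (e.g., from a Lagrangian relaxation or an approximate linear program) would tighten the bound, which is the improvement the abstract describes.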
Keywords
infinite horizon dynamic programs,information relaxations,Lagrangian relaxations,inventory control,multiclass queues