Recursive Stochastic Games with Positive Rewards

ICALP '08: Proceedings of the 35th International Colloquium on Automata, Languages and Programming, Part I (2008)

Abstract
We study the complexity of a class of Markov decision processes and, more generally, stochastic games, called 1-exit Recursive Markov Decision Processes (1-RMDPs) and 1-exit Recursive Simple Stochastic Games (1-RSSGs) with strictly positive rewards. These are finitely presented countable-state zero-sum stochastic games with a total expected reward objective. They subsume standard finite-state MDPs and Condon's simple stochastic games, and correspond to optimization and game versions of several classic stochastic models with rewards. Such stochastic models arise naturally as models of probabilistic procedural programs with recursion, and the problems we address are motivated by the goal of analyzing the optimal/pessimal expected running time in such a setting.

We give polynomial-time algorithms for 1-exit Recursive Markov Decision Processes (1-RMDPs) with positive rewards. Specifically, we show that the exact optimal value of both maximizing and minimizing 1-RMDPs with positive rewards can be computed in polynomial time (this value may be ∞). For two-player 1-RSSGs with positive rewards, we prove a "stackless and memoryless" determinacy result, and show that deciding whether the game value is at least a given value r is in NP ∩ coNP. We also prove that a simultaneous strategy improvement algorithm converges to the value and optimal strategies for these stochastic games. We observe that 1-RSSG positive reward games are "harder" than finite-state SSGs in several senses.
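To make the kind of equation system behind these values concrete, here is a minimal, hypothetical sketch of naive value iteration on Bellman-style equations x_u = r_u + opt_v x_v (with opt ranging over max, min, or a probabilistic average), the monotone fixed-point systems that underlie total-reward values in models of this kind. It is not the paper's algorithm: the polynomial-time and NP ∩ coNP results above rest on strategy improvement and other techniques, and the node structure, names, and divergence cutoff below are illustrative assumptions only.

# Illustrative sketch only: naive value iteration on a Bellman-style system
#   x_u = r_u + opt_{v in succ(u)} x_v   (opt = max, min, or weighted average)
# with strictly positive rewards r_u > 0. All names here are hypothetical.
from dataclasses import dataclass, field

INF = float("inf")

@dataclass
class Node:
    reward: float                               # strictly positive reward r_u > 0
    kind: str                                   # "max", "min", or "avg"
    succ: list = field(default_factory=list)    # successor node ids
    probs: list = field(default_factory=list)   # used only when kind == "avg"

def bellman_step(values, nodes):
    """One synchronous application of the monotone Bellman operator."""
    new = {}
    for u, node in nodes.items():
        succ_vals = [values[v] for v in node.succ]
        if not succ_vals:                        # terminal node: just its reward
            new[u] = node.reward
        elif node.kind == "max":
            new[u] = node.reward + max(succ_vals)
        elif node.kind == "min":
            new[u] = node.reward + min(succ_vals)
        else:                                    # probabilistic (average) node
            new[u] = node.reward + sum(p * x for p, x in zip(node.probs, succ_vals))
    return new

def value_iteration(nodes, rounds=1000, tol=1e-9, bound=1e12):
    """Iterate from 0; values grow monotonically and may diverge to infinity."""
    values = {u: 0.0 for u in nodes}
    for _ in range(rounds):
        new = bellman_step(values, nodes)
        if all(abs(new[u] - values[u]) < tol for u in nodes):
            return new
        values = new
        if any(x > bound for x in values.values()):
            return {u: INF for u in nodes}       # crude divergence heuristic
    return values

# Tiny example: a max node choosing between a terminal and a probabilistic loop.
nodes = {
    "t": Node(reward=1.0, kind="avg"),
    "p": Node(reward=1.0, kind="avg", succ=["s", "t"], probs=[0.5, 0.5]),
    "s": Node(reward=1.0, kind="max", succ=["t", "p"]),
}
print(value_iteration(nodes))

In this toy instance the max node "s" prefers looping through the probabilistic node "p" over stopping at the terminal "t", and the iteration converges to x_t = 1, x_p = 4, x_s = 5. On instances whose optimal value is infinite, the iterates grow without bound, which is why the sketch uses a crude divergence cutoff rather than detecting ∞ exactly.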
Keywords
stochastic model, Markov decision process, polynomial time