Learning Optimal Strategies for Temporal Tasks in Stochastic Games

arXiv (Cornell University), 2022

Abstract
Synthesis from linear temporal logic (LTL) specifications provides assured controllers for autonomous systems operating in stochastic and potentially adversarial environments. Automatic synthesis tools, however, require a model of the environment to construct controllers. In this work, we introduce a model-free reinforcement learning (RL) approach that derives controllers from given LTL specifications even when the environment is completely unknown. We model the problem of satisfying the LTL specifications as a stochastic game (SG) between the controller and the adversarial environment; we then learn optimal controller strategies that maximize the probability of satisfying the LTL specifications against the worst-case environment behavior. We first construct a product game using the deterministic parity automaton (DPA) translated from the given LTL specification. By deriving distinct rewards and discount factors from the acceptance condition of the DPA, we reduce the maximization of the worst-case probability of satisfying the LTL specification to the maximization of a discounted reward objective in the product game; this allows model-free RL algorithms to be used to learn an optimal controller strategy. To address the scalability problems that arise when the number of colors defining the acceptance condition of the DPA is large, we propose a lazy color generation method in which distinct rewards and discount factors are introduced only when needed, and an approximate method in which the controller eventually focuses on only one color. In several case studies, we show that our approach scales to a wide range of LTL formulas, significantly outperforming existing methods for learning controllers from LTL specifications in SGs.
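To make the reduction concrete, below is a minimal, illustrative Python sketch (not the authors' implementation) of the core idea: every product-game state carries a DPA color, and that color determines both the reward and the discount factor used in the Q-learning update, so that maximizing discounted reward approximates maximizing the worst-case satisfaction probability. Everything specific here is an assumption for illustration: the toy transition structure and state names, the turn-based (rather than concurrent) game, and the particular color-to-(reward, discount) mapping; the paper's exact construction from the parity acceptance condition is not reproduced.

```python
"""Minimal sketch: minimax-style Q-learning on a toy turn-based product game
where the DPA color of the current state fixes the reward and discount.
All structures below are illustrative assumptions, not the paper's construction."""
import random
from collections import defaultdict

# Toy product game: transitions[state][action] -> list of (next_state, prob).
transitions = {
    "s0": {"a": [("s1", 1.0)], "b": [("s2", 1.0)]},
    "s1": {"x": [("s0", 0.9), ("s2", 0.1)], "y": [("s2", 1.0)]},
    "s2": {"a": [("s2", 1.0)]},
}
controller_states = {"s0", "s2"}      # MAX player moves here; MIN elsewhere
color = {"s0": 2, "s1": 1, "s2": 0}   # assumed DPA color of each product state

# Assumed per-color discount factors: higher colors discount less, so the
# highest color visited infinitely often dominates the long-run value,
# loosely mirroring the max-color semantics of a parity condition.
GAMMAS = {0: 0.90, 1: 0.95, 2: 0.99}

def reward_and_discount(state):
    """Illustrative mapping: even ("accepting") colors earn 1 - gamma_k,
    odd colors earn 0; each color uses its own discount factor."""
    k = color[state]
    gamma = GAMMAS[k]
    r = (1.0 - gamma) if k % 2 == 0 else 0.0
    return r, gamma

def step(state, action):
    """Sample a successor state from the transition distribution."""
    succs = transitions[state][action]
    nexts = [s for s, _ in succs]
    probs = [p for _, p in succs]
    return random.choices(nexts, weights=probs)[0]

def q_learning(episodes=5000, horizon=50, alpha=0.1, eps=0.1):
    """Q-learning with a minimax backup: max over controller actions,
    min over (adversarial) environment actions."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = "s0"
        for _ in range(horizon):
            acts = list(transitions[s])
            if random.random() < eps:               # epsilon-greedy exploration
                a = random.choice(acts)
            else:
                pick = max if s in controller_states else min
                a = pick(acts, key=lambda u: Q[(s, u)])
            r, gamma = reward_and_discount(s)       # state-dependent r, gamma
            s2 = step(s, a)
            pick2 = max if s2 in controller_states else min
            target = r + gamma * pick2(Q[(s2, u)] for u in transitions[s2])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

if __name__ == "__main__":
    Q = q_learning()
    for s in sorted(transitions):                   # print the learned policy
        pick = max if s in controller_states else min
        print(s, pick(transitions[s], key=lambda a: Q[(s, a)]))
```

In a turn-based SG the minimax backup reduces to a plain max or min depending on which player owns the successor state, which keeps the sketch model-free and tabular; the paper's lazy color generation would correspond to adding entries to a mapping like GAMMAS only when a new color is actually encountered during learning.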