Decomposing Synthesized Strategies for Reactive Multi-agent Reinforcement Learning

Chenyang Zhu, Jinyu Zhu, Yujie Cai, Fang Wang

TASE (2023)

Abstract
Multi-Agent Reinforcement Learning (MARL) has been used to solve sequential decision problems with a collection of intelligent agents interacting in a shared environment. However, the design complexity of MARL strategies grows with the complexity of the task specifications. In addition, current MARL approaches suffer from slow convergence and reward sparsity when dealing with multi-task specifications. Linear temporal logic is an established software engineering formalism for describing non-Markovian task specifications, and strategies synthesized from such specifications can serve as a priori knowledge to train the agents to interact with the environment more efficiently. In this paper, we consider multiple agents that react to each other under a high-level reactive temporal logic specification called Generalized Reactivity of rank 1 (GR(1)). We first decompose the strategy synthesized from the GR(1) specification into a set of potential-based reward machines for the individual agents. We prove that the parallel composition of these reward machines forward-simulates the original reward machine, which satisfies the GR(1) specification. We then extend the Markov Decision Process (MDP) with the synchronized reward machines. A value-iteration-based approach is developed to compute the potential values of the reward machine based on the strategy structure, and a decentralized Q-learning algorithm is proposed to train the agents on the extended MDP. Experiments on multi-agent learning under different reactive temporal logic specifications demonstrate the effectiveness of the proposed method, showing superior learning curves and optimal rewards.
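The abstract describes potential-based reward machines synchronized with an MDP and trained by decentralized Q-learning. The sketch below is a minimal, illustrative rendering of that idea, not the paper's implementation: the `RewardMachine` class, the shaping formula r + γ·φ(u') − φ(u), and the `env` interface (reset/actions/step returning a high-level event) are assumptions introduced here for illustration; in the paper the potentials would come from value iteration over the reward machine derived from the synthesized GR(1) strategy.

```python
import random
from collections import defaultdict

class RewardMachine:
    """Finite-state machine over high-level events. Per the abstract, the
    potentials phi(u) would be computed by value iteration over the machine's
    structure; here they are simply supplied as a dictionary (assumption)."""
    def __init__(self, transitions, rewards, potentials, initial_state):
        self.delta = transitions    # (u, event) -> next machine state u'
        self.rewards = rewards      # (u, event) -> base reward
        self.phi = potentials       # u -> potential value
        self.u0 = initial_state

    def step(self, u, event, gamma):
        u_next = self.delta.get((u, event), u)
        base = self.rewards.get((u, event), 0.0)
        # Potential-based reward shaping: r' = r + gamma * phi(u') - phi(u)
        shaped = base + gamma * self.phi[u_next] - self.phi[u]
        return u_next, shaped

def q_learning(env, rm, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over the product of environment states and
    reward-machine states (one agent's view of the synchronized product)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, u = env.reset(), rm.u0
        done = False
        while not done:
            acts = env.actions(s)
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda b: Q[(s, u, b)]))
            s_next, event, done = env.step(s, a)   # env emits a high-level event
            u_next, r = rm.step(u, event, gamma)
            best = 0.0 if done else max(Q[(s_next, u_next, b)] for b in env.actions(s_next))
            Q[(s, u, a)] += alpha * (r + gamma * best - Q[(s, u, a)])
            s, u = s_next, u_next
    return Q
```

In the decentralized setting the abstract describes, each agent would run such a loop with its own reward machine obtained from the decomposed strategy, with the machines synchronizing on shared events; the sketch above shows only a single agent's product construction.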
Keywords
reactive, synthesized strategies, reinforcement learning, multi-agent