Deep Reinforcement Learning in Continuous Multi-Agent Environments

Ang Li, Michael Kuchnik, Yixin Luo, Rohan Sawhney

semanticscholar (2017)

Abstract
Many of the recent successes of deep reinforcement learning have been in single-agent domains with discrete, low-dimensional action spaces. However, several important and interesting problems in communication, robotics, control, and gaming have continuous, high-dimensional action spaces that involve interactions between multiple agents. Naively discretizing action spaces has many limitations, most notably the curse of dimensionality. On the other hand, single-agent techniques struggle in non-stationary multi-agent environments because they do not take the actions of other agents into account when modeling an agent's actions and estimates of future returns. In this project report, we apply Deep Q-Network (DQN) [4], Deep Deterministic Policy Gradient (DDPG) [2], and Multi-Agent Deep Deterministic Policy Gradient (MADDPG) [3] to continuous multi-agent domains with both competitive and cooperative scenarios. In particular, we test the effectiveness of these methods in the classic predator-prey game [Figure 1], where slower agents chase faster adversaries.1
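The key structural idea behind MADDPG, as the abstract describes it, is that each agent's critic is trained on the observations and actions of all agents (making the environment stationary from the critic's perspective), while each actor still acts only on its own observation at execution time. A minimal sketch of that input split, with illustrative agent counts and dimensions (not the paper's actual network sizes), might look like:

```python
# Hedged sketch of MADDPG-style centralized-critic vs decentralized-actor
# inputs. All names and dimensions here are illustrative assumptions.

N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 2

# Dummy per-agent observations and continuous actions.
observations = [[float(i)] * OBS_DIM for i in range(N_AGENTS)]
actions = [[0.5] * ACT_DIM for _ in range(N_AGENTS)]

def actor_input(agent_idx):
    """Decentralized actor: at execution time it conditions only on
    its own observation."""
    return observations[agent_idx]

def critic_input():
    """Centralized critic (training only): conditions on every agent's
    observation and action, so the other agents' policies no longer make
    the learning target non-stationary."""
    flat = []
    for obs in observations:
        flat.extend(obs)
    for act in actions:
        flat.extend(act)
    return flat

assert len(actor_input(0)) == OBS_DIM                       # 4
assert len(critic_input()) == N_AGENTS * (OBS_DIM + ACT_DIM)  # 18
```

A naively discretized alternative would instead need `K ** ACT_DIM` joint actions per agent for `K` bins per dimension, which is the curse of dimensionality the abstract refers to.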