Learning in Mean Field Games: A Survey
arXiv (2022)
Abstract
Non-cooperative and cooperative games with a very large number of players
have many applications but remain generally intractable when the number of
players increases. Introduced by Lasry and Lions, and Huang, Caines and
Malhamé, Mean Field Games (MFGs) rely on a mean-field approximation to allow
the number of players to grow to infinity. Traditional methods for solving
these games generally rely on solving partial or stochastic differential
equations with full knowledge of the model. Recently, Reinforcement Learning
(RL) has emerged as a promising approach to solving complex problems at scale.
Combining RL and MFGs offers a path to solving games at a very large scale, both in terms
of population size and environment complexity. In this survey, we review the
quickly growing recent literature on RL methods to learn equilibria and social
optima in MFGs. We first identify the most common settings (static, stationary,
and evolutive) of MFGs. We then present a general framework for classical
iterative methods (based on best-response computation or policy evaluation) to
solve MFGs in an exact way. Building on these algorithms and the connection
with Markov Decision Processes, we explain how RL can be used to learn MFG
solutions in a model-free way. Last, we present numerical illustrations on a
benchmark problem, and conclude with some perspectives.
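To make the "iterative methods based on best-response computation" concrete, here is a minimal sketch of fictitious play on a toy static mean field game. The game itself (three locations, linear congestion costs) is an illustrative assumption, not a benchmark from the survey: each agent picks a location, pays its base cost plus the fraction of the population already there, and the mean field is updated by averaging best responses.

```python
import numpy as np

# Toy static MFG (illustrative assumption, not from the survey):
# a continuum of agents spreads over K locations. The cost of
# location a under mean field mu is base[a] + mu[a] (crowd aversion).
K = 3
base = np.array([0.0, 0.5, 1.0])  # intrinsic cost of each location

def best_response(mu):
    """Distribution putting all mass on a cheapest location."""
    costs = base + mu
    br = np.zeros(K)
    br[np.argmin(costs)] = 1.0
    return br

# Fictitious play: the mean field is the running average of past
# best responses, mu_n = mu_{n-1} + (BR(mu_{n-1}) - mu_{n-1}) / n.
mu = np.ones(K) / K  # start from the uniform mean field
for n in range(1, 2001):
    mu = mu + (best_response(mu) - mu) / n

print(np.round(mu, 3))          # approximate equilibrium distribution
print(np.round(base + mu, 3))   # costs equalize on the support
```

At the equilibrium of this toy game, the two occupied locations have equal cost (0.75) while the third is too expensive to attract any mass; fictitious play converges here because the game has a congestion (potential) structure, which is also one of the settings where the survey's convergence guarantees apply.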
Keywords
mean field games, learning