A Single Online Agent Can Efficiently Learn Mean Field Games
arXiv (2024)
Abstract
Mean field games (MFGs) are a promising framework for modeling the behavior
of large-population systems. However, solving MFGs can be challenging due to
the coupling of forward population evolution and backward agent dynamics.
Typically, obtaining mean field Nash equilibria (MFNE) involves an iterative
approach where the forward and backward processes are solved alternately, known
as fixed-point iteration (FPI). This method requires fully observed population
propagation and agent dynamics over the entire spatial domain, which could be
impractical in some real-world scenarios. To overcome this limitation, this
paper introduces a novel online single-agent model-free learning scheme, which
enables a single agent to learn MFNE using online samples, without prior
knowledge of the state-action space, reward function, or transition dynamics.
Specifically, the agent updates its policy through the value function (Q),
while simultaneously evaluating the mean field state (M), using the same batch
of observations. We develop two variants of this learning scheme: off-policy
and on-policy QM iteration. We prove that they efficiently approximate FPI, and
a sample complexity guarantee is provided. The efficacy of our methods is
confirmed by numerical experiments.
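The core idea described above — a single agent that, from the same batch of online samples, both updates its Q-values and re-estimates the mean field M — can be illustrated with a toy sketch. This is a minimal illustration on an assumed finite state-action space with a hypothetical crowd-aversion reward; the variable names, step sizes, and update rules are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 5, 2  # assumed small state and action spaces

# Toy unknown dynamics: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(S), size=(S, A))

def reward(s, a, M):
    # Hypothetical mean-field-coupled reward: the agent dislikes
    # crowded states (reward decreases with the mass M[s] at s).
    return -M[s] + 0.1 * a

Q = np.zeros((S, A))
M = np.ones(S) / S          # running estimate of the mean field state
alpha, beta = 0.1, 0.05     # Q-update and M-update step sizes (assumed)
gamma, eps = 0.9, 0.2       # discount factor, exploration rate

s = 0
for t in range(20000):
    # Epsilon-greedy behavior policy (off-policy flavor of the scheme).
    a = int(rng.integers(A)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next = int(rng.choice(S, p=P[s, a]))
    r = reward(s, a, M)

    # Q update (standard Q-learning target) from the current sample ...
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

    # ... while the same observation also refreshes the mean field
    # estimate M via an exponential average of state visitations.
    onehot = np.zeros(S)
    onehot[s] = 1.0
    M = (1 - beta) * M + beta * onehot

    s = s_next

print(np.round(M, 3))
```

Under this coupling, the fixed point where the greedy policy of Q is optimal against M, and M is the stationary distribution induced by that policy, plays the role of the mean field Nash equilibrium that the iterative FPI scheme targets.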