Provable Policy Gradient Methods for Average-Reward Markov Potential Games
arXiv (2024)
Abstract
We study Markov potential games under the infinite-horizon average-reward
criterion. Most previous studies have considered the discounted-reward setting.
We prove that algorithms based on both independent policy gradient and
independent natural policy gradient converge globally to a Nash equilibrium
under the average-reward criterion. To set the stage for gradient-based
methods, we first establish that the average reward is a smooth function of the
policies and provide sensitivity bounds for the differential value functions,
under certain conditions on the ergodicity and the second-largest eigenvalue of
the underlying Markov decision process (MDP). We prove that three algorithms,
policy gradient, proximal-Q, and natural policy gradient (NPG), converge to an
ϵ-Nash equilibrium with time complexity O(1/ϵ^2), given a gradient/differential
Q-function oracle. When policy gradients have to be estimated, we propose an
algorithm with Õ(1/(min_{s,a} π(a|s) · δ)) sample complexity that achieves a δ
approximation error with respect to the ℓ_2 norm. Equipped with this estimator,
we derive the first sample-complexity analysis for a policy gradient ascent
algorithm, featuring a sample complexity of Õ(1/ϵ^5). Simulation studies are
presented.
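To illustrate the independent policy gradient dynamics the abstract refers to, here is a minimal sketch in a toy special case: a single-state (stateless) identical-interest game, which is a potential game and hence a degenerate Markov potential game. Each agent runs softmax policy gradient ascent on its own expected reward using an exact gradient oracle, and the joint policy converges to a pure Nash equilibrium. The payoff matrix `PHI`, learning rate, and iteration count are illustrative choices, not taken from the paper.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Identical-interest 2x2 game: both agents receive PHI[a1][a2].
# Action pair (1, 1) with payoff 2 is the best Nash equilibrium.
PHI = [[1.0, 0.0],
       [0.0, 2.0]]

def run_independent_pg(lr=2.0, iters=3000):
    """Each agent independently ascends the gradient of its own
    expected reward w.r.t. its softmax logits (policy parameters)."""
    th1, th2 = [0.0, 0.0], [0.0, 0.0]  # softmax logits for each agent
    for _ in range(iters):
        p1, p2 = softmax(th1), softmax(th2)
        # Differential Q-values: expected payoff of each own action,
        # marginalizing over the other agent's current policy.
        q1 = [sum(PHI[a][b] * p2[b] for b in range(2)) for a in range(2)]
        q2 = [sum(PHI[a][b] * p1[a] for a in range(2)) for b in range(2)]
        J = sum(p1[a] * q1[a] for a in range(2))  # common expected reward
        # Softmax policy gradient: dJ/dtheta_a = pi(a) * (Q(a) - J).
        for a in range(2):
            th1[a] += lr * p1[a] * (q1[a] - J)
            th2[a] += lr * p2[a] * (q2[a] - J)
    return softmax(th1), softmax(th2)

if __name__ == "__main__":
    p1, p2 = softmax.__call__(*()) if False else run_independent_pg()
    value = sum(p1[a] * PHI[a][b] * p2[b] for a in range(2) for b in range(2))
    print(f"p1={p1}, p2={p2}, value={value:.4f}")
```

From the uniform initialization, both agents concentrate on action 1 and the joint value approaches the equilibrium payoff 2. In the full average-reward MPG setting of the paper, the oracle gradient used here is replaced by estimated differential Q-values, which is what drives the Õ(1/ϵ^5) sample complexity.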