A Unified Perspective on Deep Equilibrium Finding

arXiv (2022)

Abstract
Extensive-form games provide a versatile framework for modeling interactions of multiple agents subject to imperfect observations and stochastic events. In recent years, two paradigms, policy space response oracles (PSRO) and counterfactual regret minimization (CFR), showed that extensive-form games may indeed be solved efficiently. Both of them are capable of leveraging deep neural networks to tackle the scalability issues inherent to extensive-form games, and we refer to them as deep equilibrium-finding algorithms. Even though PSRO and CFR share some similarities, they are often regarded as distinct, and the question of which is superior to the other remains open. Instead of answering this question directly, in this work we propose a unified perspective on deep equilibrium finding that generalizes both PSRO and CFR. Our four main contributions are: i) a novel response oracle (RO) which computes Q values as well as reaching probability values and baseline values; ii) two transform modules -- a pre-transform and a post-transform -- represented by neural networks transforming the outputs of RO to a latent additive space (LAS), and then the LAS to action probabilities for execution; iii) two average oracles -- local average oracle (LAO) and global average oracle (GAO) -- where LAO operates on LAS and GAO is used for evaluation only; and iv) a novel method inspired by fictitious play that optimizes the transform modules and average oracles, and automatically selects the optimal combination of components of the two frameworks. Experiments on the Leduc poker game demonstrate that our approach can outperform both frameworks.
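To make the described pipeline (RO outputs → pre-transform → LAS → local average oracle → post-transform → action probabilities) more concrete, below is a minimal PyTorch sketch. All class names, tensor shapes, and the uniform averaging used for the local average oracle are illustrative assumptions for exposition, not the paper's actual implementation (the paper learns the transform modules and average oracles).

import torch
import torch.nn as nn

class PreTransform(nn.Module):
    """Maps response-oracle outputs (Q values, reaching probability, baseline)
    into a latent additive space (LAS). Dimensions are assumed for illustration."""
    def __init__(self, num_actions, las_dim):
        super().__init__()
        # Per information state: one Q value per action, plus reach prob and baseline.
        self.net = nn.Sequential(
            nn.Linear(num_actions + 2, las_dim),
            nn.ReLU(),
            nn.Linear(las_dim, las_dim),
        )

    def forward(self, q_values, reach_prob, baseline):
        x = torch.cat([q_values, reach_prob, baseline], dim=-1)
        return self.net(x)

class PostTransform(nn.Module):
    """Maps an (averaged) LAS vector to action probabilities for execution."""
    def __init__(self, las_dim, num_actions):
        super().__init__()
        self.net = nn.Linear(las_dim, num_actions)

    def forward(self, las):
        return torch.softmax(self.net(las), dim=-1)

def local_average_oracle(las_history):
    """LAO placeholder: aggregate LAS vectors across iterations.
    A uniform mean is only one possible choice; the paper optimizes this oracle."""
    return torch.stack(las_history, dim=0).mean(dim=0)

# Toy usage with made-up shapes and random stand-ins for RO outputs.
num_actions, las_dim = 3, 16
pre = PreTransform(num_actions, las_dim)
post = PostTransform(las_dim, num_actions)

las_history = []
for _ in range(4):  # pretend these are successive response-oracle calls
    q = torch.randn(1, num_actions)   # Q values from the RO
    reach = torch.rand(1, 1)          # reaching probability value
    baseline = torch.randn(1, 1)      # baseline value
    las_history.append(pre(q, reach, baseline))

policy = post(local_average_oracle(las_history))  # action probabilities
print(policy)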
Keywords
deep equilibrium finding, unified perspective