Model-Free Neural Counterfactual Regret Minimization With Bootstrap Learning

arXiv (2023)

Abstract
Counterfactual regret minimization (CFR) has achieved striking results in solving large-scale imperfect information games (IIGs). Neural-network-based CFR (neural CFR) is a promising technique that reduces computation and memory consumption by generalizing decision information across similar states. Current neural CFR algorithms must approximate cumulative regrets, but doing so efficiently and accurately in large-scale IIGs remains challenging. In this article, a new CFR variant, recursive CFR (ReCFR), is proposed. In ReCFR, recursive substitute values (RSVs) are learned and used to replace cumulative regrets. It is proven that ReCFR converges to a Nash equilibrium at a rate of $O(1/\sqrt{T})$. Based on ReCFR, a new model-free neural CFR with bootstrap learning, neural ReCFR-B, is proposed. Due to the recursive and noncumulative nature of RSVs, neural ReCFR-B has lower-variance training targets than other neural CFRs. Experimental results show that neural ReCFR-B is competitive with state-of-the-art neural CFR algorithms at a much lower training cost.
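To make the cumulative-regret bookkeeping concrete (the quantity that neural CFR methods must approximate, and that ReCFR's RSVs are designed to replace), below is a minimal regret-matching sketch for a single information set. This is an illustrative toy, not the paper's method: the `regret_matching` helper and the random placeholder counterfactual values are assumptions for demonstration; in real CFR the counterfactual values come from traversing the game tree.

```python
import numpy as np

def regret_matching(cumulative_regret: np.ndarray) -> np.ndarray:
    """Derive a strategy from cumulative regrets via regret matching.

    Positive regrets are normalized into a probability distribution;
    if no action has positive regret, play uniformly at random.
    """
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    return np.full(len(cumulative_regret), 1.0 / len(cumulative_regret))

# Illustrative update loop for one information set with 3 actions.
# The counterfactual values are random stand-ins; in tabular CFR they
# come from a full tree traversal, and in neural CFR from sampled rollouts.
rng = np.random.default_rng(0)
num_actions = 3
cum_regret = np.zeros(num_actions)
cum_strategy = np.zeros(num_actions)

for t in range(1000):
    strategy = regret_matching(cum_regret)
    cf_values = rng.normal(size=num_actions)  # placeholder counterfactual action values
    node_value = strategy @ cf_values         # expected value under the current strategy
    cum_regret += cf_values - node_value      # instantaneous regrets accumulate over iterations
    cum_strategy += strategy                  # the average strategy is what converges

avg_strategy = cum_strategy / cum_strategy.sum()
print(avg_strategy)
```

The `cum_regret` array is exactly the per-action cumulative quantity that grows with the number of iterations; approximating it with a network is what the abstract identifies as hard, and what ReCFR's recursive, noncumulative RSVs sidestep.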
Keywords
Counterfactual regret minimization (CFR), game theory, imperfect information games (IIGs), neural networks