Off-OAB: Off-Policy Policy Gradient Method with Optimal Action-Dependent Baseline
arxiv(2024)
Abstract
Policy-based methods have achieved remarkable success in solving challenging
reinforcement learning problems. Among these methods, off-policy policy
gradient methods are particularly important because they can learn from
off-policy data. However, these methods suffer from the high variance of the
off-policy policy gradient (OPPG) estimator, which results in poor sample
efficiency during training. In this paper, we propose an off-policy policy
gradient method with an optimal action-dependent baseline (Off-OAB) to
mitigate this variance issue. Specifically, this baseline preserves the OPPG
estimator's unbiasedness while provably minimizing its variance. To improve
practical computational efficiency, we design an approximate version of this
optimal baseline. Using this approximation, our method (Off-OAB) reduces the
variance of the OPPG estimator during policy optimization. We evaluate the
proposed Off-OAB method on six representative tasks from OpenAI Gym and
MuJoCo, where it surpasses state-of-the-art methods on the majority of these
tasks.
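
For readers unfamiliar with baseline subtraction, here is a minimal sketch of the mechanism the abstract refers to, assuming the standard importance-sampled OPPG estimator; the notation (behavior policy \beta, its state distribution d^\beta, importance weight \rho) and the state-only case shown below are illustrative assumptions, and the paper's actual optimal action-dependent baseline b(s,a) and its unbiasedness argument are derived in the paper itself.

% Hedged sketch (not the paper's exact derivation): the importance-sampled
% off-policy policy gradient with a baseline subtracted from Q^{\pi}.
\[
  \hat{g} \;=\; \mathbb{E}_{s \sim d^{\beta},\, a \sim \beta}\!\left[
    \rho(s,a)\, \nabla_\theta \log \pi_\theta(a \mid s)\,
    \bigl(Q^{\pi}(s,a) - b(s,a)\bigr)
  \right],
  \qquad
  \rho(s,a) = \frac{\pi_\theta(a \mid s)}{\beta(a \mid s)}.
\]
% For a state-only baseline b(s), the subtracted term vanishes in expectation,
% because the importance weight converts the expectation back to \pi_\theta:
\[
  \mathbb{E}_{a \sim \beta}\!\bigl[\rho(s,a)\, \nabla_\theta \log \pi_\theta(a \mid s)\, b(s)\bigr]
  \;=\; b(s)\, \nabla_\theta \sum_{a} \pi_\theta(a \mid s)
  \;=\; b(s)\, \nabla_\theta 1 \;=\; 0,
\]
% so subtracting b leaves the estimator unbiased while changing its variance.

An action-dependent baseline b(s,a) does not cancel this way automatically, which is why preserving unbiasedness while minimizing variance requires the specific construction the paper proposes; its closed form is given in the paper.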