Guaranteed Trust Region Optimization via Two-Phase KL Penalization
CoRR (2023)
Abstract
On-policy reinforcement learning (RL) has become a popular framework for solving sequential decision problems due to its computational efficiency and theoretical simplicity. Some on-policy methods guarantee that every policy update is constrained to a trust region relative to the prior policy, which ensures training stability. These methods often require computationally intensive non-linear optimization or a particular form of action distribution. In this work, we show that applying KL penalization alone is nearly sufficient to enforce such trust regions. We then show that introducing a "fixup" phase is sufficient to guarantee that a trust region is enforced on every policy update, while adding fewer than 5% additional gradient steps in practice. The resulting algorithm, which we call FixPO, can train a variety of policy architectures and action spaces, is easy to implement, and produces results competitive with other trust region methods.
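To make the two-phase mechanism concrete, below is a minimal sketch in PyTorch of how such an update might look. The loss form, the KL direction, and all names and hyperparameters (two_phase_update, beta, epsilon, max_fixup_steps) are illustrative assumptions based on the abstract, not the authors' implementation.

```python
# Sketch of a two-phase KL-penalized policy update, assuming `policy(obs)`
# returns a torch.distributions.Distribution and `old_dist` was computed
# with the prior policy and is detached from the graph.
import torch
from torch.distributions import kl_divergence

def two_phase_update(policy, optimizer, obs, actions, advantages,
                     old_dist, beta=1.0, epsilon=0.05, max_fixup_steps=100):
    """One policy update: a KL-penalized step, then a 'fixup' phase."""
    # Phase 1: a standard gradient step on the KL-penalized surrogate loss.
    new_dist = policy(obs)
    ratio = torch.exp(new_dist.log_prob(actions) - old_dist.log_prob(actions))
    kl = kl_divergence(old_dist, new_dist).mean()
    loss = -(ratio * advantages).mean() + beta * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Phase 2 ("fixup"): extra gradient steps on the KL term alone until the
    # trust region kl <= epsilon holds, so the constraint is satisfied before
    # the next batch of data is collected.
    for _ in range(max_fixup_steps):
        kl = kl_divergence(old_dist, policy(obs)).mean()
        if kl <= epsilon:
            break
        optimizer.zero_grad()
        kl.backward()
        optimizer.step()
    return kl.item()
```

Because the fixup loop runs until the KL constraint holds, every update satisfies the trust region by construction; the abstract reports that in practice this costs fewer than 5% additional gradient steps.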