A Simple Mixture Policy Parameterization for Improving Sample Efficiency of CVaR Optimization
arXiv (2024)
Abstract
Reinforcement learning algorithms utilizing policy gradients (PG) to optimize
Conditional Value at Risk (CVaR) face significant challenges with sample
inefficiency, hindering their practical applications. This inefficiency stems
from two main factors: a focus on tail-end performance that overlooks many
sampled trajectories, and the potential for vanishing gradients when the lower
tail of the return distribution is overly flat. To address these challenges, we
propose a simple mixture policy parameterization. This method integrates a
risk-neutral policy with an adjustable policy to form a risk-averse policy. By
employing this strategy, all collected trajectories can be utilized for policy
updating, and the issue of vanishing gradients is counteracted by stimulating
higher returns through the risk-neutral component, thus lifting the tail and
preventing flatness. Our empirical study reveals that this mixture
parameterization is uniquely effective across a variety of benchmark domains.
Specifically, it excels in identifying risk-averse CVaR policies in some MuJoCo
environments where the traditional CVaR-PG fails to learn a reasonable policy.
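For readers unfamiliar with the objective: for a return R and level α ∈ (0, 1], CVaR_α(R) = E[R | R ≤ VaR_α(R)] is the expected return over the worst α-fraction of trajectories, which is why a vanilla CVaR-PG update discards the remaining (1 − α)-fraction of samples. The abstract does not spell out the exact form of the mixture, so the following PyTorch sketch is only one plausible reading: a learnable weight w blends a risk-neutral component with an adjustable component into a single risk-averse behavior policy. The class and attribute names (GaussianPolicy, MixturePolicy, mix_logit) are hypothetical, not taken from the paper.

```python
import torch
from torch import nn
from torch.distributions import Normal

class GaussianPolicy(nn.Module):
    """Minimal diagonal-Gaussian policy head (illustrative only)."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, act_dim)
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        return Normal(self.net(obs), self.log_std.exp())

class MixturePolicy(nn.Module):
    """Hypothetical mixture parameterization: with probability w, act from
    the risk-neutral component; otherwise act from the adjustable one."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.risk_neutral = GaussianPolicy(obs_dim, act_dim)  # pursues mean return
        self.adjustable = GaussianPolicy(obs_dim, act_dim)    # shaped by the CVaR objective
        self.mix_logit = nn.Parameter(torch.tensor(0.0))      # sigmoid -> mixture weight w

    def sample(self, obs):
        w = torch.sigmoid(self.mix_logit)
        comp = self.risk_neutral if torch.rand(()) < w else self.adjustable
        return comp.dist(obs).sample()

    def log_prob(self, obs, act):
        # Mixture density log(w * p_rn(a|s) + (1 - w) * p_adj(a|s)),
        # computed stably with logsumexp; usable in any PG estimator.
        w = torch.sigmoid(self.mix_logit)
        parts = torch.stack([
            torch.log(w) + self.risk_neutral.dist(obs).log_prob(act).sum(-1),
            torch.log1p(-w) + self.adjustable.dist(obs).log_prob(act).sum(-1),
        ])
        return torch.logsumexp(parts, dim=0)
```

Because the mixture density is available in closed form, a policy-gradient update through `log_prob` assigns credit to both components on every sampled trajectory, which is consistent with the abstract's claim that all collected trajectories can be used for policy updating.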