Risk-Sensitive RL with Optimized Certainty Equivalents via Reduction to Standard RL
arXiv (2024)
Abstract
We study Risk-Sensitive Reinforcement Learning (RSRL) with the Optimized
Certainty Equivalent (OCE) risk, which generalizes Conditional Value-at-Risk
(CVaR), entropic risk, and Markowitz's mean-variance. Using an augmented Markov
Decision Process (MDP), we propose two general meta-algorithms via reductions
to standard RL: one based on optimistic algorithms and another based on policy
optimization. Our optimistic meta-algorithm generalizes almost all prior RSRL
theory with entropic risk or CVaR. Under discrete rewards, our optimistic
theory also certifies the first RSRL regret bounds for MDPs with bounded
coverability, e.g., exogenous block MDPs. Under discrete rewards, our policy
optimization meta-algorithm enjoys both global convergence and local
improvement guarantees in a novel metric that lower bounds the true OCE risk.
Finally, we instantiate our framework with PPO, construct an MDP, and show that
it learns the optimal risk-sensitive policy while prior algorithms provably
fail.
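
For readers unfamiliar with the OCE, the standard definition (due to Ben-Tal and Teboulle; it is not spelled out in the abstract) is OCE_u(X) = sup_b { b + E[u(X - b)] } for a concave, nondecreasing utility u with u(0) = 0, and different choices of u recover the special cases named above. The following minimal Python sketch is not from the paper: the utility parameters tau, beta, c and the grid search over b are illustrative assumptions. It estimates the OCE of a sampled return distribution and instantiates the three utilities.

import numpy as np

def oce(returns, u, grid):
    # Empirical OCE: maximize b + mean(u(X - b)) over candidate values of b.
    returns = np.asarray(returns, dtype=float)
    return max(b + u(returns - b).mean() for b in grid)

# Utilities recovering the special cases named in the abstract
# (parameter values tau, beta, c are illustrative, not from the paper):
cvar_u = lambda x, tau=0.1: np.minimum(x, 0.0) / tau               # CVaR at level tau
entropic_u = lambda x, beta=1.0: (1.0 - np.exp(-beta * x)) / beta  # entropic risk
mean_var_u = lambda x, c=0.5: x - c * x ** 2                       # mean-variance

X = np.random.default_rng(0).normal(loc=1.0, scale=1.0, size=10_000)
grid = np.linspace(X.min(), X.max(), 200)
print(oce(X, cvar_u, grid), oce(X, entropic_u, grid), oce(X, mean_var_u, grid))

Each utility reproduces a known identity: u(x) = min(x, 0)/tau gives the Rockafellar-Uryasev form of CVaR, u(x) = (1 - e^{-beta x})/beta gives entropic risk -(1/beta) log E[e^{-beta X}], and u(x) = x - c x^2 gives E[X] - c Var(X). Note also that for a fixed b the inner expectation is a standard expected-utility RL objective, which is, roughly, the structure the augmented-MDP reduction in the abstract exploits before optimizing over b.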