More Benefits of Being Distributional: Second-Order Bounds for Reinforcement Learning
CoRR (2024)
Abstract
In this paper, we prove that Distributional Reinforcement Learning (DistRL),
which learns the return distribution, can obtain second-order bounds in both
online and offline RL in general settings with function approximation.
Second-order bounds are instance-dependent bounds that scale with the variance
of the return; we prove they are tighter than the previously known small-loss
bounds of distributional RL. To the best of our knowledge, our results are the
first second-order bounds for low-rank MDPs and for offline RL. When
specializing to contextual bandits (the one-step RL problem), we show that a
distributional-learning-based optimism algorithm simultaneously achieves a
second-order worst-case regret bound and a second-order gap-dependent bound.
We also empirically demonstrate the benefit of DistRL in
contextual bandits on real-world datasets. We highlight that our analysis with
DistRL is relatively simple, follows the general framework of optimism in the
face of uncertainty, and does not require weighted regression. Our results
suggest that DistRL is a promising framework for obtaining second-order bounds
in general RL settings, thus further reinforcing the benefits of DistRL.
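
To make the contrast concrete, the following LaTeX sketch shows the typical shape of a second-order bound versus a small-loss (first-order) bound in a cost-minimization setting. The symbols K, \sigma_k^2, and V^\star are illustrative placeholders rather than the paper's exact theorem statements, and constants and log factors are omitted.

% Illustrative bound shapes (assumed notation, not the paper's theorems).
% K: number of episodes; \sigma_k^2: variance of the return in episode k;
% V^\star: optimal expected cost (the quantity small-loss bounds scale with).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{align*}
  \text{second-order:} &\quad \mathrm{Regret}(K) \lesssim
    \sqrt{\textstyle\sum_{k=1}^{K} \sigma_k^2} + \text{lower-order terms},\\
  \text{small-loss:}   &\quad \mathrm{Regret}(K) \lesssim
    \sqrt{V^\star K} + \text{lower-order terms}.
\end{align*}
\end{document}

For costs bounded in [0,1], the variance of the return is at most its mean, so \sum_k \sigma_k^2 \le V^\star K: the second-order bound is never worse than the small-loss bound, and it can be much tighter on near-deterministic instances.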