The Effective Horizon Explains Deep RL Performance in Stochastic Environments
arXiv (2023)
Abstract
Reinforcement learning (RL) theory has largely focused on proving minimax
sample complexity bounds. These require strategic exploration algorithms that
use relatively limited function classes for representing the policy or value
function. Our goal is to explain why deep RL algorithms often perform well in
practice, despite using random exploration and much more expressive function
classes like neural networks. Our work arrives at an explanation by showing
that many stochastic MDPs can be solved by performing only a few steps of value
iteration on the random policy's Q function and then acting greedily. When this
is true, we find that it is possible to separate the exploration and learning
components of RL, making it much easier to analyze. We introduce a new RL
algorithm, SQIRL, that iteratively learns a near-optimal policy by exploring
randomly to collect rollouts and then performing a limited number of steps of
fitted-Q iteration over those rollouts. Any regression algorithm that satisfies
basic in-distribution generalization properties can be used in SQIRL to
efficiently solve common MDPs. This can explain why deep RL works, since it is
empirically established that neural networks generalize well in-distribution.
Furthermore, SQIRL explains why random exploration works well in practice. We
leverage SQIRL to derive instance-dependent sample complexity bounds for RL
that are exponential only in an "effective horizon" of lookahead and in the
complexity of the class used for function approximation. Empirically, we also
find that SQIRL performance strongly correlates with PPO and DQN performance in
a variety of stochastic environments, supporting that our theoretical analysis
is predictive of practical performance. Our code and data are available at
https://github.com/cassidylaidlaw/effective-horizon.
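
To make the abstract's description of SQIRL concrete, the sketch below shows the core loop it describes: collect rollouts with purely random exploration, run a small number of fitted-Q iteration steps over them using any off-the-shelf regressor, and then act greedily with respect to the fitted Q function. This is a minimal illustration under assumed interfaces, not the authors' implementation: the Gym-style environment API, the choice of a random-forest regressor, and the hyperparameter names are all placeholders.

```python
# Minimal sketch of the SQIRL idea: random exploration + a few steps of
# fitted-Q iteration with a generic regressor, then greedy action selection.
# Environment interface, regressor choice, and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def collect_random_rollouts(env, num_episodes):
    """Gather (state, action, reward, next_state, done) tuples under the
    uniformly random policy (the random-exploration phase)."""
    transitions = []
    for _ in range(num_episodes):
        state, _ = env.reset()
        done = False
        while not done:
            action = env.action_space.sample()  # random exploration
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            transitions.append((state, action, reward, next_state, done))
            state = next_state
    return transitions


def fitted_q_iteration(transitions, num_actions, k, gamma=0.99):
    """Run k steps of fitted-Q iteration over the collected rollouts.
    Any regressor with good in-distribution generalization can be plugged in;
    a random forest is used here purely for illustration."""
    states = np.array([t[0] for t in transitions], dtype=float)
    actions = np.array([t[1] for t in transitions], dtype=float)
    rewards = np.array([t[2] for t in transitions], dtype=float)
    next_states = np.array([t[3] for t in transitions], dtype=float)
    dones = np.array([t[4] for t in transitions], dtype=float)

    X = np.hstack([states, actions.reshape(-1, 1)])
    q_model = None
    for _ in range(k):
        if q_model is None:
            # First step: regress onto immediate rewards (the random policy's
            # one-step values serve as the starting point).
            targets = rewards
        else:
            # Bootstrap targets from the previous iteration's Q estimate.
            next_q = np.stack(
                [
                    q_model.predict(
                        np.hstack([next_states, np.full((len(next_states), 1), a)])
                    )
                    for a in range(num_actions)
                ],
                axis=1,
            )
            targets = rewards + gamma * (1.0 - dones) * next_q.max(axis=1)
        q_model = RandomForestRegressor(n_estimators=50).fit(X, targets)
    return q_model


def greedy_action(q_model, state, num_actions):
    """Act greedily with respect to the fitted Q function."""
    q_values = [
        q_model.predict(
            np.hstack([np.asarray(state, dtype=float), [a]])[None, :]
        )[0]
        for a in range(num_actions)
    ]
    return int(np.argmax(q_values))
```

In this reading, the exploration step (random rollouts) and the learning step (supervised regression inside fitted-Q iteration) are cleanly separated, which is the property the abstract highlights: only in-distribution generalization of the regressor is required, and the number of fitted-Q steps plays the role of the "effective horizon" of lookahead.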