Harnessing Density Ratios for Online Reinforcement Learning
CoRR (2024)
Abstract
The theories of offline and online reinforcement learning, despite having
evolved in parallel, have begun to show signs of the possibility for a
unification, with algorithms and analysis techniques for one setting often
having natural counterparts in the other. However, the notion of density ratio
modeling, an emerging paradigm in offline RL, has been largely absent from
online RL, perhaps for good reason: the very existence and boundedness of
density ratios relies on access to an exploratory dataset with good coverage,
but the core challenge in online RL is to collect such a dataset without having
one to start. In this work we show – perhaps surprisingly – that density
ratio-based algorithms have online counterparts. Assuming only the existence of
an exploratory distribution with good coverage, a structural condition known as
coverability (Xie et al., 2023), we give a new algorithm (GLOW) that uses
density ratio realizability and value function realizability to perform
sample-efficient online exploration. GLOW addresses unbounded density ratios
via careful use of truncation, and combines this with optimism to guide
exploration. GLOW is computationally inefficient; we complement it with a more
efficient counterpart, HyGLOW, for the Hybrid RL setting (Song et al., 2022)
wherein online RL is augmented with additional offline data. HyGLOW is derived
as a special case of a more general meta-algorithm that provides a provable
black-box reduction from hybrid RL to offline RL, which may be of independent
interest.
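As a toy illustration of the truncation device the abstract attributes to GLOW, the sketch below clips density ratios (marginalized importance weights) at a fixed threshold before using them to reweight an estimate. The tabular setup, function names, and clip value are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch: truncated density-ratio reweighting over a tiny
# discrete state-action space. Illustrative only; GLOW's actual use of
# truncation and optimism is more involved.

def truncated_weights(d_target, d_data, clip):
    """Clip each ratio d_target/d_data at `clip` so weights stay bounded
    even where the data distribution has poor coverage."""
    return {sa: min(d_target[sa] / d_data[sa], clip) for sa in d_data}

def weighted_value(rewards, d_data, weights):
    """Importance-weighted value estimate under the data distribution."""
    return sum(d_data[sa] * weights[sa] * rewards[sa] for sa in d_data)

# Two state-action pairs; the data distribution barely covers pair "b",
# so the raw ratio there (5.0) is large and truncation caps it.
d_data   = {"a": 0.9, "b": 0.1}
d_target = {"a": 0.5, "b": 0.5}
rewards  = {"a": 1.0, "b": 2.0}

w = truncated_weights(d_target, d_data, clip=3.0)
est = weighted_value(rewards, d_data, w)
```

Truncation trades a controlled bias (the capped weight on "b") for bounded variance, which is what makes density-ratio methods usable when exploratory coverage is imperfect.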
Keywords
reinforcement learning theory, online RL, offline RL, hybrid RL, density ratio, marginalized importance weight, weight function, general function approximation