The Generalization Gap in Offline Reinforcement Learning
CoRR (2023)
Abstract
Despite recent progress in offline learning, these methods are still trained
and tested on the same environment. In this paper, we compare the
generalization abilities of widely used online and offline learning methods
such as online reinforcement learning (RL), offline RL, sequence modeling, and
behavioral cloning. Our experiments show that offline learning algorithms
perform worse on new environments than online learning ones. We also introduce
the first benchmark for evaluating generalization in offline learning,
collecting datasets of varying sizes and skill-levels from Procgen (2D video
games) and WebShop (e-commerce websites). The datasets contain trajectories for
a limited number of game levels or natural language instructions and at test
time, the agent has to generalize to new levels or instructions. Our
experiments reveal that existing offline learning algorithms struggle to match
the performance of online RL on both train and test environments. Behavioral
cloning is a strong baseline, outperforming state-of-the-art offline RL and
sequence modeling approaches when trained on data from multiple environments
and tested on new ones. Finally, we find that increasing the diversity of the
data, rather than its size, improves performance on new environments for all
offline learning algorithms. Our study demonstrates the limited generalization
of current offline learning algorithms, highlighting the need for more research
in this area.