Zero-Shot Reinforcement Learning from Low Quality Data
arXiv (2023)
Abstract
Zero-shot reinforcement learning (RL) promises to provide agents that can
perform any task in an environment after an offline, reward-free pre-training
phase. Methods leveraging successor measures and successor features have shown
strong performance in this setting, but require access to large heterogeneous
datasets for pre-training, which cannot be expected for most real problems.
Here, we explore how the performance of zero-shot RL methods degrades when
trained on small homogeneous datasets, and propose fixes inspired by
conservatism, a well-established feature of performant single-task offline RL
algorithms. We evaluate our proposals across various datasets, domains and
tasks, and show that conservative zero-shot RL algorithms outperform their
non-conservative counterparts on low quality datasets, and perform no worse on
high quality datasets. Somewhat surprisingly, our proposals also outperform
baselines that get to see the task during training. Our code is available via
https://enjeeneer.io/projects/zero-shot-rl/.
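
The fix the abstract points to, conservatism, is most familiar from single-task offline RL algorithms such as CQL, which penalise value estimates on out-of-distribution actions. As a rough illustration of how that idea carries over to the zero-shot setting, the sketch below adds a CQL-style penalty to a task-conditioned Q-function in PyTorch. This is a hypothetical sketch, not the paper's implementation: the paper applies conservatism within successor-measure and successor-feature methods, and every name here (QNet, conservative_td_loss, cql_weight, the placeholder policy sample) is an assumption for illustration.

```python
# Hypothetical sketch: CQL-style conservatism on a task-conditioned Q-function.
# Not the paper's implementation; all names and shapes are illustrative.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Q(s, a, z): value of action a in state s for task vector z."""
    def __init__(self, state_dim, action_dim, task_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + task_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a, z):
        return self.net(torch.cat([s, a, z], dim=-1)).squeeze(-1)

def conservative_td_loss(q, target_q, batch, z, n_random=10, cql_weight=1.0):
    """One-step TD loss plus a CQL-style penalty: push Q down on random
    (out-of-distribution) actions and back up on dataset actions.
    `batch` holds tensors (s, a, r, s2, done); in reward-free pre-training,
    r would be synthesised from the task vector z rather than observed."""
    s, a, r, s2, done = batch
    with torch.no_grad():
        a2 = torch.tanh(torch.randn_like(a))  # placeholder next-action sample
        target = r + 0.99 * (1.0 - done) * target_q(s2, a2, z)
    td_loss = ((q(s, a, z) - target) ** 2).mean()

    # Evaluate Q on uniformly sampled actions and penalise high values there.
    B, A = a.shape
    rand_a = torch.empty(B, n_random, A).uniform_(-1.0, 1.0)
    s_rep = s.unsqueeze(1).expand(B, n_random, s.shape[-1]).reshape(-1, s.shape[-1])
    z_rep = z.unsqueeze(1).expand(B, n_random, z.shape[-1]).reshape(-1, z.shape[-1])
    q_ood = q(s_rep, rand_a.reshape(-1, A), z_rep).reshape(B, n_random)
    cql_penalty = (torch.logsumexp(q_ood, dim=1) - q(s, a, z)).mean()

    return td_loss + cql_weight * cql_penalty
```

The logsumexp term drives down the values the network assigns to actions unsupported by the dataset, while the subtracted in-distribution term pushes dataset actions back up. On the small homogeneous datasets the abstract describes, this discourages the derived policies from exploiting overestimated out-of-distribution values.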