Learning Future Representation with Synthetic Observations for Sample-efficient Reinforcement Learning
CoRR (2024)
Abstract
In visual Reinforcement Learning (RL), upstream representation learning largely determines the effectiveness of downstream policy learning. Employing auxiliary tasks allows the agent to enhance visual representation in a targeted manner, thereby improving the sample efficiency and performance of downstream RL. Prior advanced auxiliary tasks all focus on how to extract as much information as possible from limited experience (including observations, actions, and rewards) through their different auxiliary objectives, whereas in this article, we instead start from another perspective: auxiliary training data. We aim to improve auxiliary representation learning for RL by enriching the auxiliary training data, proposing Learning Future representation with Synthetic observations (LFS), a novel self-supervised RL approach. Specifically, we propose a training-free method to synthesize observations that may contain future information, as well as a data selection approach to eliminate unqualified synthetic noise. The remaining synthetic observations and real observations then serve as the auxiliary data for a clustering-based temporal association task for representation learning. LFS allows the agent to access and learn from observations before they actually appear, so that it can quickly understand and exploit them when they occur later. In addition, LFS does not rely on rewards or actions, which gives it a wider scope of application (e.g., learning from video) than recent advanced auxiliary tasks. Extensive experiments demonstrate that LFS exhibits state-of-the-art RL sample efficiency on challenging continuous-control tasks and enables advanced visual pre-training based on action-free video demonstrations.
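
The abstract outlines three components: training-free synthesis of candidate future observations, selection to discard noisy synthetic samples, and a clustering-based temporal association objective over real and synthetic observations. The abstract does not specify the implementation, so the following is only a minimal sketch under stated assumptions: linear frame extrapolation for the training-free synthesis, embedding-distance filtering for selection, and a swapped-prediction loss over learnable prototypes for the clustering-based temporal association. All function names and parameters below are hypothetical illustrations, not the authors' actual method.

```python
import torch
import torch.nn.functional as F

def synthesize_future(obs_t, obs_tp1, alpha=1.0):
    """Training-free synthesis (assumption: linear frame extrapolation).
    obs_t, obs_tp1: (B, C, H, W) consecutive real observations."""
    return obs_tp1 + alpha * (obs_tp1 - obs_t)

def select_synthetic(real, synth, encoder, max_dist=2.0):
    """Data selection (assumption: keep synthetic observations whose
    embeddings stay within a distance budget of their real anchors)."""
    with torch.no_grad():
        d = F.pairwise_distance(encoder(real), encoder(synth))  # (B,)
    return synth[d < max_dist]

def temporal_association_loss(z_t, z_tp1, prototypes, temperature=0.1):
    """Clustering-based temporal association (assumption: SwAV-style
    swapped prediction over learnable prototypes).
    z_t, z_tp1: (B, D) embeddings of temporally adjacent observations;
    prototypes: (K, D) learnable cluster centers."""
    z_t = F.normalize(z_t, dim=-1)
    z_tp1 = F.normalize(z_tp1, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    logits_t = z_t @ protos.T / temperature      # (B, K)
    logits_tp1 = z_tp1 @ protos.T / temperature
    # Soft cluster assignments of one step act as targets for the other.
    p_t = logits_t.softmax(dim=-1).detach()
    p_tp1 = logits_tp1.softmax(dim=-1).detach()
    loss = -(p_tp1 * logits_t.log_softmax(dim=-1)).sum(-1).mean() \
           - (p_t * logits_tp1.log_softmax(dim=-1)).sum(-1).mean()
    return 0.5 * loss
```

In this sketch, synthetic observations would first be filtered with select_synthetic and then mixed with real observations in the auxiliary batch, so that the temporal association objective is trained on both data sources; no rewards or actions enter the loss, which is consistent with the action-free video pre-training setting described above.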