Towards Principled Representation Learning from Videos for Reinforcement Learning
ICLR 2024
Abstract
We study pre-training representations for decision-making using video data,
which is abundantly available for tasks such as game agents and software
testing. Even though significant empirical advances have been made on this
problem, a theoretical understanding remains absent. We initiate the
theoretical investigation into principled approaches for representation
learning and focus on learning the latent state representations of the
underlying MDP using video data. We study two types of settings: one where
there is iid noise in the observation, and a more challenging setting where
there is also the presence of exogenous noise, which is non-iid noise that is
temporally correlated, such as the motion of people or cars in the background.
We study three commonly used approaches: autoencoding, temporal contrastive
learning, and forward modeling. We prove upper bounds for temporal contrastive
learning and forward modeling in the presence of only iid noise. We show that
these approaches can learn the latent state and use it to do efficient
downstream RL with polynomial sample complexity. When exogenous noise is also
present, we establish a lower bound result showing that the sample complexity
of learning from video data can be exponentially worse than learning from
action-labeled trajectory data. This partially explains why reinforcement
learning with video pre-training is hard. We evaluate these representation
learning methods in two visual domains, yielding results that are consistent
with our theoretical findings.
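As an illustration of one of the three approaches named above, the generic temporal contrastive objective can be sketched as an InfoNCE-style loss: temporally adjacent frames form positive pairs, and other frames in the batch serve as negatives. This is a minimal NumPy sketch of the standard objective, not the paper's exact method; the linear `encode` stand-in and all names are hypothetical.

```python
import numpy as np

def encode(obs, W):
    # Linear encoder as a stand-in for a learned representation phi(obs).
    return obs @ W

def temporal_contrastive_loss(obs_t, obs_tp1, W, temperature=0.1):
    """InfoNCE-style loss: each (obs_t[i], obs_tp1[i]) is a positive
    (temporally adjacent) pair; other rows in the batch act as negatives."""
    z_t = encode(obs_t, W)
    z_tp1 = encode(obs_tp1, W)
    # Normalize so the dot product is a cosine similarity.
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    z_tp1 = z_tp1 / np.linalg.norm(z_tp1, axis=1, keepdims=True)
    logits = (z_t @ z_tp1.T) / temperature  # (B, B) similarity matrix
    # Cross-entropy with the matching (diagonal) pair as the target class.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
obs_t = rng.normal(size=(8, 16))                    # frames at time t
obs_tp1 = obs_t + 0.01 * rng.normal(size=(8, 16))   # adjacent frames at t+1
W = rng.normal(size=(16, 4))
loss = temporal_contrastive_loss(obs_t, obs_tp1, W)
print(float(loss))
```

Minimizing this loss pulls representations of adjacent frames together while pushing apart frames from different times, which is the mechanism the paper analyzes for recovering latent state under iid observation noise.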
Keywords
Reinforcement Learning, Representation Learning