Reframing Offline Reinforcement Learning as a Regression Problem
CoRR (2024)
Abstract
This study proposes reformulating offline reinforcement learning as a
regression problem that can be solved with decision trees. The agent is trained
to predict actions from input states, return-to-go (RTG), and timestep
information. With gradient-boosted trees, both training and inference are very
fast, with training taking less than a minute. Despite the simplification
inherent in this reformulation, the agent demonstrates performance at least on
par with established methods, validated on standard datasets from the D4RL
Gym-MuJoCo tasks. We further discuss the agent's ability to generalize by
testing it in two extreme cases, showing how it learns to model return
distributions effectively even from highly skewed expert datasets, and how it
exhibits robust performance in scenarios with sparse/delayed rewards.