λ-models: Effective Decision-Aware Reinforcement Learning with Latent Models
arXiv preprint (2023)
Abstract
The idea of decision-aware model learning, that models should be accurate
where it matters for decision-making, has gained prominence in model-based
reinforcement learning. While promising theoretical results have been
established, the empirical performance of algorithms leveraging a
decision-aware loss has been lacking, especially in continuous control
problems. In this paper, we present a study of the components needed for
decision-aware reinforcement learning and showcase design choices
that enable well-performing algorithms. To this end, we provide a theoretical
and empirical investigation into algorithmic ideas in the field. We highlight
that empirical design decisions established in the MuZero line of work, most
importantly the use of a latent model, are vital to achieving good performance
for related algorithms. Furthermore, we show that the MuZero loss function is
biased in stochastic environments and establish that this bias has practical
consequences. Building on these findings, we present an overview of which
decision-aware loss functions are best used in what empirical scenarios,
providing actionable insights to practitioners in the field.
Key words
reinforcement learning