Off-policy Distributional Q(λ): Distributional RL without Importance Sampling
CoRR (2024)
Abstract
We introduce off-policy distributional Q(λ), a new addition to the
family of off-policy distributional evaluation algorithms. Off-policy
distributional Q(λ) does not apply importance sampling for off-policy
learning, which introduces intriguing interactions with signed measures. Such
unique properties distinguish distributional Q(λ) from other existing alternatives
such as distributional Retrace. We characterize the algorithmic properties of
distributional Q(λ) and validate theoretical insights with tabular
experiments. We show how distributional Q(λ)-C51, a combination of
Q(λ) with the C51 agent, exhibits promising results on deep RL
benchmarks.
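For intuition about the "no importance sampling" claim, below is a minimal sketch of the expected-value off-policy Q(λ) backup that the paper's algorithm generalises to distributions. The function name, trajectory format, and hyperparameters are assumptions for illustration, not the paper's implementation; note that no importance-sampling ratios appear anywhere in the update.

```python
import numpy as np

def q_lambda_update(Q, trajectory, pi, alpha=0.1, gamma=0.99, lam=0.9):
    """One offline tabular off-policy Q(lambda) backup over a trajectory.

    Q          : (num_states, num_actions) array of current estimates.
    trajectory : list of (state, action, reward, next_state) transitions
                 collected under an arbitrary behaviour policy.
    pi         : (num_states, num_actions) array, the target policy.

    The off-policy correction is the expected bootstrap under pi,
    not an importance-sampling ratio.
    """
    T = len(trajectory)
    # TD errors, all computed from the pre-update table Q (forward view).
    deltas = np.empty(T)
    for s, (x, a, r, x_next) in enumerate(trajectory):
        deltas[s] = r + gamma * pi[x_next] @ Q[x_next] - Q[x, a]
    # Each visited pair receives the (gamma * lambda)-discounted sum of
    # the TD errors from its own time step onward.
    for t, (x, a, _, _) in enumerate(trajectory):
        weights = (gamma * lam) ** np.arange(T - t)
        Q[x, a] += alpha * weights @ deltas[t:]
    return Q

# Toy usage: three states, two actions, uniform target policy.
Q = np.zeros((3, 2))
pi = np.full((3, 2), 0.5)
traj = [(0, 1, 1.0, 1), (1, 0, 0.0, 2), (2, 1, 2.0, 0)]
Q = q_lambda_update(Q, traj, pi)
```

In the paper's distributional variant, each scalar TD error above is replaced by a difference of return distributions; such differences need not be nonnegative, which is the source of the signed measures the abstract highlights.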