Universal Value Function Approximators.

ICML'15: Proceedings of the 32nd International Conference on Machine Learning, Volume 37 (2015)

Abstract
Value functions are a core component of reinforcement learning systems. The main idea is to construct a single function approximator V(s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V(s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals.
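The two-stage supervised scheme described in the abstract can be sketched as follows: first factor a table of observed values into separate state and goal embeddings, then regress from raw state/goal representations to those embeddings, so that V(s, g) is recovered as a dot product. The sketch below is a minimal illustration under assumed sizes, synthetic data, and one-hot inputs with linear least-squares regressors; the paper itself uses neural networks and agent-collected values.

```python
# Minimal sketch of UVFA two-stage training (illustrative assumptions:
# synthetic value table, one-hot features, linear regressors).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_goals, rank = 50, 20, 8

# Stand-in for a table of observed values V[s, g].
true_phi = rng.normal(size=(n_states, rank))
true_psi = rng.normal(size=(n_goals, rank))
V_observed = true_phi @ true_psi.T

# Stage 1: factor the observed value table into separate embedding
# vectors for states (phi) and goals (psi), here via truncated SVD.
U, S, Vt = np.linalg.svd(V_observed, full_matrices=False)
phi = U[:, :rank] * S[:rank]   # state embeddings, shape (n_states, rank)
psi = Vt[:rank].T              # goal embeddings, shape (n_goals, rank)

# Stage 2: learn mappings from raw state/goal features to the embeddings.
# One-hot features make this exact; the paper uses neural networks so
# the mappings generalise to previously unseen states and goals.
S_feats = np.eye(n_states)
G_feats = np.eye(n_goals)
W_s, *_ = np.linalg.lstsq(S_feats, phi, rcond=None)
W_g, *_ = np.linalg.lstsq(G_feats, psi, rcond=None)

def uvfa_value(s_feat, g_feat):
    """Estimate V(s, g) as the dot product of predicted embeddings."""
    return (s_feat @ W_s) @ (g_feat @ W_g)

# Check: the factored approximator reconstructs the observed table.
V_hat = (S_feats @ W_s) @ (G_feats @ W_g).T
print("max reconstruction error:", np.abs(V_hat - V_observed).max())
```

The low-rank structure is what lets a single approximator share information across goals: embeddings for nearby goals are similar, so values learned for some goals transfer to unseen ones.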