Multi-objectivization of reinforcement learning problems by reward shaping

Neural Networks (2014)

Abstract
Multi-objectivization is the process of transforming a single-objective problem into a multi-objective one. Research in evolutionary optimization has demonstrated that adding objectives correlated with the original objective can make the resulting problem easier to solve than the original single-objective problem. In this paper we investigate the multi-objectivization of reinforcement learning problems. We propose a novel method for the multi-objectivization of Markov decision problems through the use of multiple reward shaping functions. Reward shaping is a technique to speed up reinforcement learning by including additional heuristic knowledge in the reward signal. The resulting composite reward signal is expected to be more informative during learning, leading the learner to identify good actions more quickly. Good reward shaping functions are by definition correlated with the target value function for the base reward signal, and we show in this paper that adding several such correlated signals can help to solve the basic single-objective problem faster and better. We prove that the total ordering of solutions, and by consequence the optimality of solutions, is preserved in this process, and empirically demonstrate the usefulness of this approach on two reinforcement learning tasks: a pathfinding problem and the Mario domain.
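The core construction described above can be sketched in a few lines. The snippet below is an illustrative Python sketch, not the paper's implementation: it uses the standard potential-based shaping term F(s, s') = γΦ(s') − Φ(s) and builds a vector-valued reward with one shaped component per heuristic potential; the two example potentials and the 1-D pathfinding setup are assumptions for illustration.

```python
# Sketch: multi-objectivization of a scalar-reward problem via several
# potential-based reward shaping functions. All names and potentials
# here are illustrative, not taken from the paper.

GAMMA = 0.99  # discount factor

def shaping_term(potential, s, s_next):
    """Potential-based shaping term: F(s, s') = gamma * phi(s') - phi(s).
    This form is known to preserve the ordering of policies."""
    return GAMMA * potential(s_next) - potential(s)

def multi_objective_reward(base_reward, potentials, s, s_next):
    """Turn a scalar reward into a vector reward: one shaped
    component per shaping function, each correlated with the
    original objective."""
    return [base_reward + shaping_term(phi, s, s_next) for phi in potentials]

# Two heuristic potentials for a toy 1-D pathfinding task with goal x = 10:
phi_distance = lambda s: -abs(10 - s)  # closer to the goal is better
phi_progress = lambda s: float(s)      # moving right is better

# One transition from state 3 to state 4 with base reward 1.0:
r_vec = multi_objective_reward(1.0, [phi_distance, phi_progress], s=3, s_next=4)
```

A multi-objective learner can then treat `r_vec` as a vector-valued signal, while the base (first-component-without-shaping) objective's optimal policies remain optimal, per the ordering-preservation result stated in the abstract.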
Keywords
Markov processes, evolutionary computation, learning (artificial intelligence), optimisation, Mario domain, Markov decision problems, composite reward signal, evolutionary optimization, heuristic knowledge, multiobjective problem, multiobjectivization, multiple reward shaping functions, pathfinding problem, reinforcement learning problems, single objective problem, target value function