A Configurable Off-Policy Evaluation with Key State-Based Bias Constraints in AI Reinforcement Learning

SocialSec (2020)

Abstract
In the field of reinforcement learning, off-policy evaluation (OPE), the core task of evaluating a new policy from trajectory data collected under an existing behavior policy, is highly important before policy deployment, since it avoids unexpected dangerous or expensive agent actions. In existing methods, the return of a trajectory is calculated by summing the Markov decision process (MDP)-based rewards of its sequential state-action pairs, and the aim is to estimate the new policy's value with minimum variance relative to the returns of the existing trajectory data. However, such methods ignore the influence of key states in OPE, which are critical to success and should be given more preference, together with the bias of the return value. In this paper, we develop a configurable OPE with key state-based bias constraints. We first adopt FP-Growth to mine the key states and obtain their corresponding reward expectations. By further configuring the scope of each reward expectation as a bias constraint, we then construct a new objective function that combines bias and variance and realize a guided importance sampling-based OPE. Taking the GridWorld game as our experimental platform, we evaluate our method through performance analysis and case studies, and compare it with mainstream methods to show its effectiveness.
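The abstract describes two main steps: mining key states from logged trajectories with FP-Growth, and running an importance sampling-based OPE whose objective combines variance with bias terms on those key states. The sketch below illustrates this general structure only; it is not the authors' implementation. The trajectory format (lists of (state, action, reward) tuples with hashable state IDs), the policy probability functions pi_e and pi_b, and the helper names mine_key_states, is_estimate, and constrained_objective are all assumptions introduced here, and the key-state mining uses the mlxtend library's FP-Growth.

```python
# Illustrative sketch, not the paper's code: (1) mine "key states" with
# FP-Growth, (2) estimate a target policy's value by per-trajectory
# importance sampling, (3) form an objective combining variance with a
# bias penalty on the mined key states.
import numpy as np
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Assumed data format: each trajectory is a list of (state, action, reward)
# tuples collected under the behavior policy; states are hashable IDs
# (e.g., GridWorld cell indices).

def mine_key_states(trajectories, min_support=0.4):
    """Treat each trajectory's visited states as one transaction and return
    the states that appear in frequent itemsets (the 'key states')."""
    transactions = [[s for s, _, _ in traj] for traj in trajectories]
    encoder = TransactionEncoder()
    onehot = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                          columns=encoder.columns_)
    frequent = fpgrowth(onehot, min_support=min_support, use_colnames=True)
    return set().union(*frequent["itemsets"]) if len(frequent) else set()

def is_estimate(trajectories, pi_e, pi_b, gamma=0.99):
    """Ordinary per-trajectory importance-sampling estimate of the target
    policy's value from behavior-policy trajectories."""
    weighted_returns = []
    for traj in trajectories:
        rho, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            rho *= pi_e(a, s) / pi_b(a, s)   # cumulative importance weight
            ret += (gamma ** t) * r          # discounted return
        weighted_returns.append(rho * ret)
    return np.mean(weighted_returns), np.var(weighted_returns)

def constrained_objective(trajectories, pi_e, pi_b, key_states,
                          key_reward_expectations, lam=1.0, gamma=0.99):
    """Objective of the form variance + lam * bias, where the bias term
    penalizes deviation of each key state's observed mean reward from its
    configured reward expectation (a user-supplied dict)."""
    _, variance = is_estimate(trajectories, pi_e, pi_b, gamma)
    bias = 0.0
    for s in key_states:
        rewards = [r for traj in trajectories
                   for (s_, _, r) in traj if s_ == s]
        if rewards:
            bias += abs(np.mean(rewards) - key_reward_expectations[s])
    return variance + lam * bias
```

In the paper the bias constraints are configured per key state as reward-expectation scopes and the importance sampling is guided by the resulting objective; the code above only shows the overall shape of such a bias-plus-variance objective under the stated assumptions.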
Keywords
OPE, Reinforcement Learning, Trajectory, Bias expectation, Key state