Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples

(2021)

Abstract
Exploration in reinforcement learning is, in general, a challenging problem. In this work, we study a more tractable class of reinforcement learning problems defined by data that provides examples of successful outcome states. In this case, the reward function can be obtained automatically by training a classifier to classify states as successful or not. We argue that, with appropriate representation and regularization, such a classifier can guide a reinforcement learning algorithm to an effective solution. However, as we will show, this requires the classifier to make uncertainty-aware predictions, which are difficult to obtain with standard deep networks. To address this, we propose a novel mechanism for obtaining calibrated uncertainty based on an amortized technique for computing the normalized maximum likelihood distribution. We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions from data, while being able to guide algorithms towards the specified goal more effectively. We show how using amortized normalized maximum likelihood for reward inference provides effective reward guidance for solving a number of challenging navigation and robotic manipulation tasks that prove difficult for other algorithms.
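As a rough illustration of the classifier-as-reward idea described in the abstract (not the paper's amortized normalized maximum likelihood method), the sketch below trains a hypothetical binary success classifier on outcome examples versus states visited by the policy and uses its predicted success probability as the reward. All names (SuccessClassifier, classifier_reward) and the PyTorch setup are assumptions for illustration only.

```python
# Minimal sketch, assuming a simple binary success classifier; this is NOT the
# authors' uncertainty-aware (amortized NML) implementation.
import torch
import torch.nn as nn

class SuccessClassifier(nn.Module):
    """Small MLP mapping a state to the logit of 'this state is a successful outcome'."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        return self.net(states).squeeze(-1)  # logits, shape (batch,)

def train_step(clf, opt, success_states, policy_states):
    """One gradient step: outcome examples are positives, states visited by the
    current policy are treated as negatives."""
    states = torch.cat([success_states, policy_states])
    labels = torch.cat([torch.ones(len(success_states)),
                        torch.zeros(len(policy_states))])
    loss = nn.functional.binary_cross_entropy_with_logits(clf(states), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def classifier_reward(clf, state: torch.Tensor) -> float:
    """Reward for the RL algorithm: predicted probability that `state` is a success.
    The paper argues this prediction must be uncertainty-aware; a plain sigmoid
    output, as here, can be overconfident on states unlike the training data."""
    with torch.no_grad():
        return torch.sigmoid(clf(state.unsqueeze(0))).item()

# Usage with random placeholder data standing in for outcome examples and rollouts:
if __name__ == "__main__":
    state_dim = 4
    clf = SuccessClassifier(state_dim)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    success = torch.randn(32, state_dim) + 2.0   # stand-in for successful outcome states
    visited = torch.randn(32, state_dim)          # stand-in for policy-visited states
    for _ in range(100):
        train_step(clf, opt, success, visited)
    print(classifier_reward(clf, torch.randn(state_dim)))
```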
Keywords
Reinforcement learning, Bayesian classifier, Inference, Bayesian probability, Machine learning, Regularization (mathematics), Computer science, Artificial intelligence, Normalized maximum likelihood