Deep visual foresight for planning robot motion

ICRA 2017

Citations: 829
Abstract
A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach does not require a calibrated camera, an instrumented training setup, or precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation (pushing objects) and can handle novel objects not seen during training.
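For intuition, here is a minimal sketch of the planning loop the abstract describes: sample candidate action sequences, roll the action-conditioned video model forward to predict their outcomes, score the predictions against a goal, and execute only the best first action before replanning. This is a hedged illustration, not the paper's implementation: the function names (`predict_frames`, `goal_cost`) and their placeholder bodies are assumptions, and simple random shooting stands in here for the paper's sampling-based optimizer, which scores predicted pixel motion toward user-specified goal positions.

```python
import numpy as np

def predict_frames(model, current_frame, actions):
    # Placeholder for the learned action-conditioned video prediction
    # network (an assumption, not the paper's model). A real version
    # would return one predicted frame per action; here we just repeat
    # the current observation so the sketch runs end to end.
    return np.repeat(current_frame[None], len(actions), axis=0)

def goal_cost(predicted_frames, goal_pixel):
    # Placeholder cost (an assumption). In the paper, cost is derived
    # from the model's predicted motion of a designated object pixel
    # toward its goal position; here we return a dummy random score.
    return float(np.random.rand())

def visual_mpc_step(model, current_frame, goal_pixel,
                    horizon=10, num_samples=100, action_dim=4):
    """One replanning step of visual MPC: sample candidate open-loop
    action sequences, score their predicted outcomes under the video
    model, and return the first action of the best sequence."""
    best_cost, best_first_action = np.inf, None
    for _ in range(num_samples):
        # Random-shooting optimization over action sequences.
        candidate = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
        frames = predict_frames(model, current_frame, candidate)
        cost = goal_cost(frames, goal_pixel)
        if cost < best_cost:
            best_cost, best_first_action = cost, candidate[0]
    # The robot executes only this first action, observes the new
    # frame, and replans, which is the model-predictive control loop.
    return best_first_action

# Example: plan one pushing action from a 64x64 RGB observation.
frame = np.zeros((64, 64, 3), dtype=np.float32)
action = visual_mpc_step(model=None, current_frame=frame, goal_pixel=(40, 20))
```

Executing only the first action and replanning at every step is what lets the controller recover from prediction errors, which matters because learned video models degrade over long horizons.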
Keywords
robot motion planning, robot learning, model-based reinforcement learning, predictive models, deep action-conditioned video prediction models, model-predictive control, nonprehensile manipulation