Motion Planner Augmented Action Spaces for Reinforcement Learning

RSS Workshop on Action Representations for Learning in Continuous Control (2020)

Abstract
Deep reinforcement learning (RL) agents are able to learn contact-rich manipulation tasks by maximizing a reward signal, but require large amounts of experience, especially in environments with many obstacles that complicate efficient exploration. In contrast, motion planners use explicit models of the agent and environment to plan collision-free paths to faraway goals, but suffer from model inaccuracies in contact-rich tasks. In this work, we propose to combine the benefits of both approaches by formulating a novel action space for continuous robotic control tasks that equips RL agents with long-horizon planning capabilities. Using this action space, we train model-free RL agents that learn to decide when to make use of the motion planner purely from reward signals. On multiple simulated object manipulation tasks, we show that our motion planner-augmented action space increases learning efficiency and facilitates exploration in environments with many obstacles.
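The core idea from the abstract, an action space where the agent itself chooses between a direct low-level command and invoking a motion planner toward a faraway goal, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the environment, the mode-flag convention, and the straight-line "planner" stub are all assumptions made for the example.

```python
import numpy as np

class ToyReachEnv:
    """Hypothetical point-mass environment: the state is a 2-D position and
    a direct action is a displacement clipped to a small per-step limit."""
    def __init__(self, step_limit=0.1):
        self.step_limit = step_limit
        self.pos = np.zeros(2)

    def step_direct(self, delta):
        delta = np.clip(delta, -self.step_limit, self.step_limit)
        self.pos = self.pos + delta
        return self.pos.copy()

class MotionPlannerAugmentedEnv:
    """Augmented action space: action[0] is a mode flag, action[1:] a target.
    If the flag exceeds a threshold, the target is handed to a (stubbed)
    motion planner that executes a multi-step path; otherwise the target is
    applied as a single step-limited direct command."""
    def __init__(self, env, threshold=0.0, n_waypoints=10):
        self.env = env
        self.threshold = threshold
        self.n_waypoints = n_waypoints

    def plan(self, start, goal):
        # Stub planner: straight-line interpolation standing in for a real
        # collision-free planner such as RRT.
        return [start + (goal - start) * t
                for t in np.linspace(0.0, 1.0, self.n_waypoints + 1)[1:]]

    def step(self, action):
        flag, target = action[0], np.asarray(action[1:], dtype=float)
        if flag > self.threshold:
            # Long-horizon mode: follow the planned path waypoint by waypoint.
            for wp in self.plan(self.env.pos.copy(), target):
                self.env.pos = wp
            return self.env.pos.copy()
        # Short-horizon mode: one direct, step-limited displacement.
        return self.env.step_direct(target)

env = MotionPlannerAugmentedEnv(ToyReachEnv())
far = env.step(np.array([1.0, 2.0, 2.0]))    # planner mode reaches a distant goal
near = env.step(np.array([-1.0, 0.5, 0.0]))  # direct mode is clipped to the step limit
```

Because both modes share one continuous action vector, a standard model-free RL algorithm can learn the mode-selection behavior purely from reward, which is the property the abstract highlights.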