CRISP: Curriculum inducing Primitive Informed Subgoal Prediction
arXiv (2023)
Abstract
Hierarchical reinforcement learning (HRL) is a promising approach that uses
temporal abstraction to solve complex long-horizon problems. However,
simultaneously learning a hierarchy of policies is unstable, as it is
challenging to train the higher-level policy when the lower-level primitive is
non-stationary. In this paper, we present CRISP, a novel HRL algorithm that
effectively generates a curriculum of achievable subgoals for evolving
lower-level primitives using reinforcement learning and imitation learning.
CRISP uses the lower-level primitive to periodically relabel a handful of
expert demonstrations, using a novel primitive-informed parsing (PIP)
approach, thereby mitigating non-stationarity. Since our approach assumes
access to only a handful of expert demonstrations, it is suitable for most
robotic control tasks. Experimental evaluations on complex robotic maze
navigation and robotic manipulation tasks demonstrate that inducing
hierarchical curriculum learning significantly improves sample efficiency and
yields efficient goal-conditioned policies for solving temporally extended
tasks. Additionally, we perform real-world robotic experiments on complex
manipulation tasks and show that CRISP generalizes impressively in
real-world scenarios.
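The relabeling idea described in the abstract, segmenting an expert demonstration into subgoals the current low-level primitive can actually achieve, can be sketched as follows. This is a minimal hypothetical illustration, not the paper's exact PIP procedure: the function names (`primitive_informed_parse`, `can_reach`) and the greedy farthest-reachable-state heuristic are assumptions for illustration only.

```python
def primitive_informed_parse(demo_states, can_reach, max_horizon=10):
    """Hypothetical sketch of PIP-style relabeling.

    Walk along a demonstration and repeatedly pick the farthest future
    state (within `max_horizon` steps) that the current low-level
    primitive is predicted to reach; those states become the subgoal
    curriculum for the higher-level policy.

    demo_states : list of states from one expert demonstration
    can_reach   : predicate (state, candidate_goal) -> bool, e.g. a
                  thresholded value or distance estimate of the primitive
    """
    subgoals = []
    i = 0
    while i < len(demo_states) - 1:
        # Default to the next state; advance to the farthest reachable one.
        j = i + 1
        last = min(i + max_horizon, len(demo_states) - 1)
        for k in range(i + 1, last + 1):
            if can_reach(demo_states[i], demo_states[k]):
                j = k
        subgoals.append(demo_states[j])
        i = j
    return subgoals


# Toy usage: 1-D states 0..9, primitive assumed able to cover a gap of 3.
states = list(range(10))
reachable = lambda s, g: abs(g - s) <= 3
print(primitive_informed_parse(states, reachable))  # → [3, 6, 9]
```

As the primitive improves during training, `can_reach` accepts more distant states, so re-running the parse yields progressively sparser, more ambitious subgoals, which is the curriculum effect the abstract refers to.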