Dexterous Legged Locomotion in Confined 3D Spaces with Reinforcement Learning
arXiv (2024)

Abstract
Recent advances of locomotion controllers utilizing deep reinforcement
learning (RL) have yielded impressive results in terms of achieving rapid and
robust locomotion across challenging terrain, such as rugged rocks, non-rigid
ground, and slippery surfaces. However, while these controllers primarily
address challenges underneath the robot, relatively little research has
investigated legged mobility through confined 3D spaces, such as narrow tunnels
or irregular voids, which impose all-around constraints. The cyclic gait
patterns produced by existing RL-based methods, which learn parameterized
locomotion skills characterized by motion parameters such as velocity and body
height, may not be adequate for navigating robots through challenging confined
3D spaces that demand both agile 3D obstacle avoidance and robust legged
locomotion. Instead, we propose to learn locomotion skills end-to-end from
goal-oriented navigation in confined 3D spaces. To address the inefficiency of
tracking distant navigation goals, we introduce a hierarchical locomotion
controller that combines a classical planner tasked with planning waypoints to
reach a faraway global goal location, and an RL-based policy trained to follow
these waypoints by generating low-level motion commands. This approach allows
the policy to explore its own locomotion skills within the entire solution
space and facilitates smooth transitions between local goals, enabling
long-term navigation towards distant goals. In simulation, our hierarchical
approach succeeds at navigating through demanding confined 3D environments,
outperforming both pure end-to-end learning approaches and parameterized
locomotion skills. We further demonstrate the successful real-world deployment
of our simulation-trained controller on a real robot.
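The hierarchical structure described above — a classical planner that emits waypoints toward a distant goal, and a learned policy that tracks the current waypoint — can be sketched in miniature. This is a hypothetical illustration only: the abstract does not specify the planner or the policy's action space, so here the planner is a straight-line waypoint generator and the RL policy is replaced by a simple proportional step, with waypoint hand-off once the robot is within a reach radius.

```python
import math

def plan_waypoints(start, goal, spacing=0.5):
    """Hypothetical global planner: discretize the straight line from start
    to goal into waypoints at a fixed spacing (the paper's classical planner
    would instead plan around obstacles in the confined 3D space)."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    dist = math.hypot(dx, dy)
    n = max(1, math.ceil(dist / spacing))
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(1, n + 1)]

def policy_step(state, waypoint, step=0.2):
    """Stand-in for the learned RL policy: take a bounded step toward the
    current waypoint. The real policy outputs low-level motion commands
    conditioned on proprioception and local observations."""
    dx, dy = waypoint[0] - state[0], waypoint[1] - state[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return waypoint
    return (state[0] + dx / dist * step, state[1] + dy / dist * step)

def navigate(start, goal, reach_radius=0.1, max_steps=1000):
    """Hierarchical loop: the policy tracks one local waypoint at a time and
    hands off to the next once it is reached, enabling long-horizon
    navigation toward a faraway global goal."""
    state, waypoints = start, plan_waypoints(start, goal)
    for _ in range(max_steps):
        if not waypoints:
            break
        wx, wy = waypoints[0]
        if math.hypot(wx - state[0], wy - state[1]) < reach_radius:
            waypoints.pop(0)  # waypoint reached: advance to the next local goal
            continue
        state = policy_step(state, waypoints[0])
    return state

final = navigate((0.0, 0.0), (3.0, 4.0))
```

The key design point the abstract argues for is the division of labor: the planner handles the sparse, long-horizon aspect of reaching a distant goal, while the policy is free to discover whatever (possibly non-cyclic) locomotion skills best follow each nearby waypoint.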