PathRL: An End-To-End Path Generation Method for Collision Avoidance Via Deep Reinforcement Learning

ICRA 2024

Abstract
Robot navigation using deep reinforcement learning (DRL) has shown great potential in improving the performance of mobile robots. Nevertheless, most existing DRL-based navigation methods primarily focus on training a policy that directly commands the robot with low-level controls, like linear and angular velocities, which leads to unstable speeds and unsmooth trajectories during long-term execution. An alternative is to train a DRL policy that outputs the navigation path directly. The robot can then follow the generated path smoothly using sophisticated velocity-planning and path-following controllers, whose parameters are specified according to the hardware platform. However, two roadblocks arise for training a DRL policy that outputs paths: (1) the action space for potential paths often involves higher dimensions compared to low-level commands, which increases the difficulty of training; (2) tracking a path takes multiple time steps rather than a single one, which requires the path to predict the robot's interactions with the dynamic environment over multiple time steps. This, in turn, amplifies the challenges associated with training. In response to these challenges, we propose PathRL, a novel DRL method that trains the policy to generate the navigation path for the robot. Specifically, we employ specific action-space discretization techniques and tailored state-space representation methods to address the associated challenges. Curriculum learning is employed to expedite the training process, while the reward function also takes into account the smooth transition between adjacent paths. In our experiments, PathRL achieves better success rates and reduces angular rotation variability compared to other DRL navigation methods, facilitating stable and smooth robot movement. We demonstrate the competitive edge of PathRL in both real-world scenarios and multiple challenging simulation environments.
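The abstract's first roadblock, the high dimensionality of a path action space, is addressed by action-space discretization. The paper does not spell out the scheme here, so the following is only a minimal hypothetical sketch of one common approach: each discrete action selects a heading offset per waypoint from a small set, so a whole multi-waypoint path is encoded by a single action index. All constants (offset set, horizon, step length) are illustrative assumptions, not values from the paper.

```python
import itertools
import math

# Hypothetical discretization (not the paper's actual scheme):
HEADING_OFFSETS = (-0.4, 0.0, 0.4)  # assumed per-waypoint heading changes, radians
NUM_WAYPOINTS = 3                   # assumed path horizon
STEP_LENGTH = 0.5                   # assumed spacing between waypoints, meters

def action_to_path(action_index, start=(0.0, 0.0), start_heading=0.0):
    """Decode a discrete action index into a list of (x, y) waypoints."""
    # Enumerate every combination of heading offsets; the action index
    # picks one combination, i.e. one candidate path shape.
    combos = list(itertools.product(HEADING_OFFSETS, repeat=NUM_WAYPOINTS))
    offsets = combos[action_index]
    x, y, heading = start[0], start[1], start_heading
    path = []
    for d_theta in offsets:
        heading += d_theta
        x += STEP_LENGTH * math.cos(heading)
        y += STEP_LENGTH * math.sin(heading)
        path.append((x, y))
    return path
```

With three offsets and three waypoints, the policy chooses among only 3**3 = 27 discrete actions, which illustrates why discretization keeps the path action space tractable compared to regressing continuous waypoint coordinates.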
Keywords
Reinforcement Learning,Collision Avoidance,Motion and Path Planning