SACPlanner: Real-World Collision Avoidance with a Soft Actor Critic Local Planner and Polar State Representations

arXiv (2023)

Abstract
We study the training performance of ROS local planners based on Reinforcement Learning (RL), and the trajectories they produce on real-world robots. We show that recent enhancements to the Soft Actor Critic (SAC) algorithm, such as RAD and DrQ, achieve almost perfect training performance after only 10,000 episodes. We also observe that on real-world robots the resulting SACPlanner is more reactive to obstacles than traditional ROS local planners such as DWA.
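The RAD enhancement mentioned above trains SAC on randomly augmented image observations, most commonly random crops applied independently to each image in a replay batch. As an illustrative sketch only (the function name, image sizes, and layout below are assumptions, not taken from the paper), a RAD-style random crop can be written as:

```python
import numpy as np

def random_crop(batch, out_size, rng=None):
    """RAD-style augmentation: take a random spatial crop of size
    out_size x out_size from each image in a (N, H, W, C) batch.
    Crop positions are sampled independently per image."""
    rng = rng or np.random.default_rng()
    n, h, w, c = batch.shape
    ys = rng.integers(0, h - out_size + 1, size=n)  # top edge per image
    xs = rng.integers(0, w - out_size + 1, size=n)  # left edge per image
    out = np.empty((n, out_size, out_size, c), dtype=batch.dtype)
    for i, (y, x) in enumerate(zip(ys, xs)):
        out[i] = batch[i, y:y + out_size, x:x + out_size]
    return out

# Hypothetical usage: crop 100x100 polar-image observations to 84x84
obs = np.zeros((8, 100, 100, 1), dtype=np.uint8)
aug = random_crop(obs, 84)
print(aug.shape)  # (8, 84, 84, 1)
```

Both actor and critic then consume the augmented batch in the usual SAC update; DrQ additionally averages Q-targets over several augmentations of the same observation.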
Keywords
perfect training,polar state representations,real-world collision avoidance,real-world robots,recent enhancements,Reinforcement Learning,resulting SACPlanner,Soft Actor Critic algorithm,Soft Actor Critic local planner,traditional ROS local planners,training performance