Deep Reinforcement Learning-Based Robot Exploration for Constructing Map of Unknown Environment

Information Systems Frontiers (2024)

Abstract
Two problems remain unsolved in traditional environment exploration algorithms. First, as exploration time increases, the robot repeatedly revisits areas that have already been explored. Second, in order to map the environment more accurately, the robot causes slight collisions during exploration. To solve these two problems, a DQN-based exploration model is proposed that enables the robot to quickly find unexplored areas in an unknown environment, together with a DQN-based navigation model that resolves the local-minima problem the robot encounters during exploration. Through a switching mechanism between the exploration model and the navigation model, the robot selects the appropriate mode according to the current exploration situation and completes the exploration task quickly. In the experimental results, the difference between the proposed unknown-environment exploration method and previous known-environment exploration methods is less than 5% under the same exploration time. Moreover, after 300,000 training rounds, the robot achieves zero collisions and almost zero repeated exploration. The proposed method is therefore more practical than previous methods.
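The switching mechanism described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the abstract does not specify how the local-minimum condition is detected or how the two DQN policies are represented, so `stuck_in_local_minimum`, the policy callables, and the grid-step environment are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of switching between an exploration policy and a
# navigation policy, as described in the abstract. The policies stand in
# for trained DQN models; the "stuck" heuristic is an assumed detector
# for the local-minima problem, not the paper's actual criterion.

EXPLORE, NAVIGATE = "explore", "navigate"

def stuck_in_local_minimum(position_history, window=5):
    """Assumed heuristic: stuck if recent positions barely change."""
    if len(position_history) < window:
        return False
    recent = position_history[-window:]
    xs = [p[0] for p in recent]
    ys = [p[1] for p in recent]
    return max(xs) - min(xs) <= 1 and max(ys) - min(ys) <= 1

def run_episode(policy_explore, policy_navigate, env_step, start, steps=100):
    """Alternate between the two policies based on exploration progress."""
    mode = EXPLORE
    pos, history = start, [start]
    for _ in range(steps):
        policy = policy_explore if mode == EXPLORE else policy_navigate
        pos = env_step(pos, policy(pos))  # apply the chosen action
        history.append(pos)
        if mode == EXPLORE and stuck_in_local_minimum(history):
            mode = NAVIGATE   # hand over to the navigation model to escape
        elif mode == NAVIGATE and not stuck_in_local_minimum(history):
            mode = EXPLORE    # progress resumed; explore again
    return history, mode
```

For example, with a trivial grid environment `env_step = lambda p, a: (p[0] + a[0], p[1] + a[1])` and a policy that always moves right, the robot keeps making progress and never switches out of exploration mode; the navigation model would only take over once the position history stagnates.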
Keywords
Robot exploration,DRL,Unknown environment,Constructing map