Flocking and Collision Avoidance for a Dynamic Squad of Fixed-Wing UAVs Using Deep Reinforcement Learning

2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021

Abstract
Developing flocking behavior for a dynamic squad of fixed-wing UAVs remains challenging due to kinematic complexity and environmental uncertainty. In this paper, we address the decentralized flocking and collision avoidance problem through deep reinforcement learning (DRL). Specifically, we formulate a decentralized DRL-based decision-making framework from the perspective of each follower, in which a collision avoidance mechanism is integrated into the flocking controller. We then propose a novel reinforcement learning algorithm, PS-CACER, for training a control policy shared by all followers. In addition, we design a plug-and-play embedding module based on convolutional neural networks and an attention mechanism. As a result, the variable-length system state can be encoded into a fixed-length embedding vector, which makes the learned DRL policy independent of the number and order of followers. Finally, numerical simulation results demonstrate the effectiveness of the proposed method, and the learned policies can be transferred directly to a semi-physical simulation without any parameter fine-tuning.
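
The abstract describes an embedding module that uses convolutions and attention to map a variable number of follower states to a fixed-length vector. The sketch below illustrates one way such a permutation-invariant embedding can be built in PyTorch; all layer sizes, names, and the state layout are illustrative assumptions and not the paper's exact PS-CACER module.

# Minimal sketch of a permutation-invariant embedding for a variable number of
# neighbor states (shared CNN feature extractor + attention pooling).
# Layer sizes and the state layout are assumptions, not the paper's exact module.
import torch
import torch.nn as nn

class NeighborEmbedding(nn.Module):
    def __init__(self, state_dim=6, hidden_dim=64, embed_dim=128):
        super().__init__()
        # 1x1 convolutions apply the same weights to every neighbor state,
        # acting as a shared per-neighbor feature extractor.
        self.feature = nn.Sequential(
            nn.Conv1d(state_dim, hidden_dim, kernel_size=1),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, embed_dim, kernel_size=1),
            nn.ReLU(),
        )
        # One attention score per neighbor; a softmax-weighted sum pools the
        # variable-length set into a single fixed-length vector.
        self.score = nn.Conv1d(embed_dim, 1, kernel_size=1)

    def forward(self, neighbor_states):
        # neighbor_states: (batch, num_neighbors, state_dim); num_neighbors may vary.
        x = neighbor_states.transpose(1, 2)                  # (batch, state_dim, num_neighbors)
        feats = self.feature(x)                              # (batch, embed_dim, num_neighbors)
        weights = torch.softmax(self.score(feats), dim=-1)   # (batch, 1, num_neighbors)
        return (feats * weights).sum(dim=-1)                 # (batch, embed_dim), order-invariant

# Usage: the output shape is the same regardless of how many followers are present.
emb = NeighborEmbedding()
print(emb(torch.randn(4, 3, 6)).shape)  # torch.Size([4, 128])
print(emb(torch.randn(4, 7, 6)).shape)  # torch.Size([4, 128])

Because the per-neighbor features are pooled by a weighted sum, the result does not depend on the order in which followers appear, which is what allows a single shared policy to handle a squad of changing size.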
Keywords
flocking, collision avoidance, dynamic squad, fixed-wing UAVs, deep reinforcement learning, flocking behavior, kinematic complexity, environmental uncertainty, decentralized flocking, collision avoidance problem, decentralized DRL-based decision making, collision avoidance mechanism, flocking controller, reinforcement learning algorithm PS-CACER, shared control policy, plug-and-play embedding module, variable-length system state, fixed-length embedding vector, learned DRL policy, learned policies