FusionAD: Multi-modality Fusion for Prediction and Planning Tasks of Autonomous Driving

Tengju Ye, Wei Jing, Chunyong Hu, Shikun Huang, Lingping Gao, Fangzhen Li, Jingke Wang, Ke Guo, Wencong Xiao, Weibo Mao, Hang Zheng, Kun Li, Junbo Chen, Kaicheng Yu

CoRR (2023)

Abstract
Building multi-modality, multi-task neural networks for accurate and robust performance is a de facto standard in the perception tasks of autonomous driving. However, leveraging data from multiple sensors to jointly optimize the prediction and planning tasks remains largely unexplored. In this paper, we present FusionAD, to the best of our knowledge the first unified framework that fuses information from the two most critical sensors, camera and LiDAR, going beyond the perception task. Concretely, we first build a transformer-based multi-modality fusion network to effectively produce fusion-based features. In contrast to the camera-based end-to-end method UniAD, we then establish fusion-aided modality-aware prediction and status-aware planning modules, dubbed FMSPnP, that take advantage of multi-modality features. We conduct extensive experiments on the commonly used nuScenes benchmark, where FusionAD achieves state-of-the-art performance: it surpasses baselines by 15% on average on perception tasks such as detection and tracking and by 10% on occupancy prediction accuracy, reduces the prediction error (ADE) from 0.708 to 0.389, and cuts the collision rate from 0.31% to 0.12%.
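The core mechanism the abstract describes is transformer-style fusion of camera and LiDAR features. As a minimal illustration (not FusionAD's actual layer; the function name, token counts, and dimensions below are hypothetical), one modality's tokens can attend to the other's via scaled dot-product cross-attention:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(camera_tokens, lidar_tokens):
    """Sketch of cross-attention fusion: camera tokens act as queries,
    LiDAR tokens as keys/values. Real implementations add learned
    projections, multiple heads, and residual connections."""
    d_k = camera_tokens.shape[-1]
    scores = camera_tokens @ lidar_tokens.T / np.sqrt(d_k)  # (Nc, Nl)
    weights = softmax(scores, axis=-1)                      # rows sum to 1
    return weights @ lidar_tokens                           # (Nc, d)

rng = np.random.default_rng(0)
cam = rng.standard_normal((4, 8))   # 4 camera BEV tokens, dim 8
lid = rng.standard_normal((6, 8))   # 6 LiDAR BEV tokens, dim 8
fused = cross_attention_fuse(cam, lid)
print(fused.shape)  # (4, 8): one fused feature per camera token
```

Each fused camera token is a convex combination of LiDAR tokens, which is how attention lets one modality borrow complementary geometric evidence from the other.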
Keywords
autonomous driving,fusionad,planning tasks,multi-modality