Attention-Based Deep Driving Model for Autonomous Vehicles with Surround-View Cameras

2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Abstract
Experienced human drivers make safe driving decisions by selectively observing the front view and the rear- and side-view mirrors. Several end-to-end methods have been proposed to learn driving models from multi-view visual information. However, these benchmark methods lack semantic understanding of the multi-view image contents, whereas human drivers reason over this information with different visual regions of interest when making decisions. In this paper, we propose an attention-based deep learning method to learn a driving model from surround-view visual information and the route planner, in which a multi-view attention module is designed to obtain the regions of interest that human drivers attend to. We evaluate our model on the Drive360 dataset in comparison with benchmark deep driving models. Results demonstrate that our model achieves competitive accuracy in both steering angle and speed prediction compared with the benchmark methods. Code is available at https://github.com/jet-uestc/MVA-Net.
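The abstract describes a multi-view attention module that weights surround-view camera features and fuses them with a route-planner input to predict steering angle and speed. The PyTorch sketch below only illustrates that general idea under assumed feature dimensions and layer sizes; it is not the authors' MVA-Net implementation (see the linked repository for that).

```python
# Minimal sketch (not the authors' code): attention over per-camera features,
# fused with a route-planner embedding to predict steering angle and speed.
# All dimensions and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class MultiViewAttention(nn.Module):
    """Scores each camera view and returns an attention-weighted fused feature."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, view_feats):
        # view_feats: (batch, num_views, feat_dim)
        weights = torch.softmax(self.score(view_feats), dim=1)  # (B, V, 1)
        return (weights * view_feats).sum(dim=1)                # (B, feat_dim)


class DrivingModel(nn.Module):
    """Fuses attended surround-view features with a route-planner embedding."""

    def __init__(self, feat_dim=256, route_dim=16):
        super().__init__()
        self.attention = MultiViewAttention(feat_dim)
        self.head = nn.Sequential(
            nn.Linear(feat_dim + route_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # outputs: [steering angle, speed]
        )

    def forward(self, view_feats, route_feat):
        fused = self.attention(view_feats)
        return self.head(torch.cat([fused, route_feat], dim=1))


if __name__ == "__main__":
    model = DrivingModel()
    views = torch.randn(4, 5, 256)    # 4 samples, 5 camera views, 256-d features
    route = torch.randn(4, 16)        # hypothetical route-planner embedding
    print(model(views, route).shape)  # torch.Size([4, 2])
```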
Keywords
deep driving model, autonomous vehicles, attention-based, surround-view