
MMFN: Multi-Modal-Fusion-Net for End-to-End Driving

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022

Abstract
Inspired by the fact that humans use diverse sensory organs to perceive the world, end-to-end driving systems deploy sensors with different modalities to obtain the global context of the 3D scene. In previous works, camera and LiDAR inputs are fused through transformers for better driving performance. These inputs are normally further interpreted as high-level map information to assist navigation tasks. Nevertheless, extracting useful information from the complex map input is challenging, since redundant information may mislead the agent and degrade driving performance. We propose a novel approach to efficiently extract features from vectorized High-Definition (HD) maps and utilize them in end-to-end driving tasks. In addition, we design a new expert that considers multiple road rules to enhance model performance. Experimental results show that both proposed improvements enable our agent to achieve superior performance compared with other methods.
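The abstract's core technical idea is extracting features from a vectorized HD map, where each map element (lane boundary, crosswalk, etc.) is a polyline of short vectors rather than a rasterized image. The paper does not reproduce its encoder here, so the following is only a minimal VectorNet-style sketch under assumed names (`encode_polyline`, toy random weights): each vector is passed through a small MLP and the results are max-pooled into one fixed-size, permutation-invariant feature per map element.

```python
import numpy as np

def encode_polyline(vectors, w1, w2):
    """Encode one vectorized map element (a polyline) into a fixed-size
    feature: a per-vector two-layer MLP followed by max-pooling.
    `vectors` has shape (n_vectors, d_in); weights are illustrative only."""
    h = np.maximum(vectors @ w1, 0.0)  # per-vector linear + ReLU
    h = np.maximum(h @ w2, 0.0)        # second layer
    return h.max(axis=0)               # permutation-invariant pooling

rng = np.random.default_rng(0)
d_in, d_hid = 4, 16
w1 = rng.standard_normal((d_in, d_hid)) * 0.1
w2 = rng.standard_normal((d_hid, d_hid)) * 0.1

# A lane boundary as a polyline: each row is (x_start, y_start, x_end, y_end).
lane = np.array([[0.0, 0.0, 1.0, 0.0],
                 [1.0, 0.0, 2.0, 0.1],
                 [2.0, 0.1, 3.0, 0.3]])
feat = encode_polyline(lane, w1, w2)
```

Because the pooling step discards vector order, the encoding is robust to how the polyline is traversed; the per-element features could then be fed, alongside camera and LiDAR tokens, into a transformer fusion stage as the abstract describes.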
Key words
camera, complex map input, diverse sensory organs, driving performance, end-to-end driving tasks, high-level map information, multimodal-fusion-net, vectorized High-Definition maps