ADriver-I: A General World Model for Autonomous Driving
CoRR (2023)
Abstract
Typically, autonomous driving adopts a modular design that divides the full
stack into perception, prediction, planning, and control parts. Though
interpretable, such a modular design tends to introduce a substantial amount of
redundancy. Recently, multimodal large language models (MLLMs) and diffusion
techniques have demonstrated superior comprehension and generation abilities.
In this paper, we first introduce the concept of the interleaved vision-action
pair, which unifies the format of visual features and control signals. Based on
these vision-action pairs, we construct a general world model for autonomous
driving built on an MLLM and a diffusion model, termed ADriver-I. It takes the
vision-action pairs as inputs and autoregressively predicts the control signal
of the current frame. The generated control signals, together with the
historical vision-action pairs, are then used as conditions to predict future
frames. With the predicted next frame, ADriver-I performs further control-signal
prediction. Such a process can be repeated indefinitely; in this way, ADriver-I
achieves autonomous driving in the world created by itself. Extensive
experiments are conducted on nuScenes and our large-scale private datasets.
ADriver-I shows impressive performance compared to several constructed
baselines. We hope ADriver-I can provide new insights for future autonomous
driving and embodied intelligence.
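The abstract describes a closed loop: predict the control for the current frame, then condition on that control plus the historical vision-action pairs to imagine the next frame, and repeat. The sketch below illustrates this rollout structure only; the class names `PolicyMLLM` and `FrameDiffusion`, their method signatures, and the bounded context length are hypothetical stand-ins and not the authors' actual interface.

```python
# Minimal sketch of ADriver-I's autoregressive rollout as described in the
# abstract. All model interfaces here are assumed placeholders, not the
# paper's real implementation.

from collections import deque


class PolicyMLLM:
    """Stand-in for the MLLM that maps the interleaved vision-action
    history plus the current frame to a control signal."""

    def predict_control(self, pairs, frame):
        # Placeholder: a real model would autoregressively decode
        # control tokens (e.g., speed and steering) from the sequence.
        return {"speed": 5.0, "steer": 0.0}


class FrameDiffusion:
    """Stand-in for the diffusion model that generates the next frame,
    conditioned on the history and the freshly predicted control."""

    def predict_frame(self, pairs):
        # Placeholder: a real model would denoise a latent into an image.
        return "predicted_frame"


def rollout(initial_frame, steps, context_len=4):
    """Alternate control prediction and frame generation so the agent
    drives inside the world it creates. `context_len` bounding the
    history is an assumed detail, not specified in the abstract."""
    policy, world = PolicyMLLM(), FrameDiffusion()
    pairs = deque(maxlen=context_len)  # historical vision-action pairs
    frame = initial_frame
    for _ in range(steps):  # the paper notes this can repeat indefinitely
        # 1. Predict the control signal of the current frame.
        control = policy.predict_control(list(pairs), frame)
        pairs.append((frame, control))
        # 2. Condition on the updated history to imagine the next frame.
        frame = world.predict_frame(list(pairs))
    return list(pairs)


if __name__ == "__main__":
    for frame, control in rollout("initial_frame", steps=3):
        print(frame, control)
```

The key design point the abstract emphasizes is that no external simulator closes the loop: the diffusion model's predicted frame feeds straight back into the next round of control prediction.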