Exploring Recurrent Long-term Temporal Fusion for Multi-view 3D Perception
arXiv (2023)
Abstract
Long-term temporal fusion is a crucial but often overlooked technique in
camera-based Bird's-Eye-View (BEV) 3D perception. Most existing methods fuse
history frames in a parallel manner. While parallel fusion can benefit from long-term
information, it suffers from increasing computational and memory overheads as
the fusion window size grows. Alternatively, BEVFormer adopts a recurrent
fusion pipeline so that history information can be efficiently integrated, yet
it fails to benefit from longer temporal frames. In this paper, we explore an
embarrassingly simple long-term recurrent fusion strategy built upon the
LSS-based methods and find it already able to enjoy the merits from both sides,
i.e., rich long-term information and efficient fusion pipeline. A temporal
embedding module is further proposed to improve the model's robustness against
occasionally missed frames in practical scenarios. We name this simple but
effective fusing pipeline VideoBEV. Experimental results on the nuScenes
benchmark show that VideoBEV obtains strong performance on various camera-based
3D perception tasks, including object detection (55.4% mAP and 62.9% NDS),
segmentation (48.6% vehicle mIoU), tracking (54.8% AMOTA), and motion
prediction (0.80m minADE and 0.463 EPA).
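The contrast the abstract draws between parallel and recurrent fusion can be sketched numerically: parallel fusion must cache every BEV feature map inside the fusion window, while recurrent fusion compresses history into a single hidden state of fixed size. The snippet below is a minimal toy illustration, not VideoBEV's actual architecture; the exponential-blend update and the `alpha` parameter are illustrative assumptions.

```python
import numpy as np

def parallel_fusion(frames):
    # Parallel fusion: all cached BEV frames are stacked and aggregated,
    # so compute and memory grow with the fusion window size.
    return np.mean(np.stack(frames), axis=0)

def recurrent_fusion(prev_state, curr_bev, alpha=0.6):
    # Recurrent fusion: blend the current BEV features into one hidden
    # state, so memory stays constant regardless of history length.
    # (alpha is an illustrative blending weight, not from the paper.)
    if prev_state is None:
        return curr_bev
    return alpha * curr_bev + (1 - alpha) * prev_state

# Toy BEV feature maps (H x W x C).
rng = np.random.default_rng(0)
frames = [rng.standard_normal((4, 4, 2)) for _ in range(8)]

# Recurrent pipeline: one state update per incoming frame.
state = None
for bev in frames:
    state = recurrent_fusion(state, bev)

print(state.shape)  # the fused state keeps the shape of a single frame
```

The key property shown here is that the recurrent state has the same footprint as one frame no matter how many frames have been fused, which is why the recurrent pipeline scales to long-term history where parallel fusion does not.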