Sample, Crop, Track: Self-Supervised Mobile 3D Object Detection for Urban Driving LiDAR

arXiv (2023)

Abstract
Deep learning has led to great progress in recent years in the detection of mobile (i.e. movement-capable) objects in urban driving scenes. Supervised approaches typically require the annotation of large training sets; there has thus been great interest in leveraging weakly-, semi-, or self-supervised methods to avoid this, with much success. Whilst weakly and semi-supervised methods still require some annotation, self-supervised methods have used cues such as motion to remove the need for annotation altogether. However, a complete absence of annotation typically degrades their performance, and ambiguities that arise during motion grouping can inhibit their ability to find accurate object boundaries. In this paper, we propose a new self-supervised mobile object detection approach called SCT. It uses both motion cues and expected object sizes to improve detection performance, and predicts a dense grid of 3D oriented bounding boxes to improve object discovery. We significantly outperform TCR, the state-of-the-art self-supervised mobile object detection method, on the KITTI tracking benchmark, and achieve performance within 30% of the fully supervised PV-RCNN++ method at IoU thresholds $\leq 0.5$. Our source code will be made available online.
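The abstract gives no implementation details, but the idea of pairing a 3D oriented box parameterisation with expected-size priors can be illustrated with a short sketch. Everything below is a hypothetical Python illustration: the `Box3D` layout, the `SIZE_PRIORS` values, and the `plausible_mobile_object` check are assumptions chosen for clarity, not SCT's actual method.

```python
import numpy as np

# Hypothetical sketch (not SCT's published implementation): a 3D oriented
# bounding box in the (x, y, z, l, w, h, yaw) parameterisation common to
# urban-driving LiDAR benchmarks such as KITTI, plus a size-prior check of
# the kind the abstract alludes to with "expected object sizes".
Box3D = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),  # centre
    ("l", np.float32), ("w", np.float32), ("h", np.float32),  # extents (m)
    ("yaw", np.float32),                                      # heading (rad)
])

# Assumed size priors in metres (length, width, height) for mobile urban
# objects; the classes and values are illustrative assumptions.
SIZE_PRIORS = {
    "pedestrian": (0.8, 0.8, 1.8),
    "cyclist":    (1.8, 0.6, 1.7),
    "car":        (4.0, 1.8, 1.6),
}

def plausible_mobile_object(box: np.void, tolerance: float = 0.5) -> bool:
    """Return True if the box's extents lie within a relative `tolerance`
    of at least one size prior, i.e. the box could be a mobile object."""
    dims = np.array([box["l"], box["w"], box["h"]], dtype=np.float32)
    for prior in SIZE_PRIORS.values():
        rel_err = np.abs(dims - np.asarray(prior, dtype=np.float32)) / prior
        if np.all(rel_err <= tolerance):
            return True
    return False

# Example: a car-sized detection passes the check, a building-sized one fails.
car = np.array([(10.0, 2.0, -1.0, 4.2, 1.8, 1.5, 0.3)], dtype=Box3D)[0]
building = np.array([(30.0, 5.0, 0.0, 12.0, 8.0, 6.0, 0.0)], dtype=Box3D)[0]
print(plausible_mobile_object(car))       # True
print(plausible_mobile_object(building))  # False
```

A check of this kind could, in principle, suppress motion-grouped clusters whose extents are implausible for any mobile object class, which is the role the abstract ascribes to expected object sizes.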
Keywords
accurate object boundaries, deep learning, detection performance, fully supervised PV-RCNN++ method, motion cues, motion grouping, movement-capable, object discovery, self-supervised methods, self-supervised mobile 3D object detection, self-supervised mobile object detection approach, semi-supervised methods, state-of-the-art self-supervised mobile object detection method TCR, supervised approaches, training sets, urban driving LiDAR, urban driving scenes