Viewer-Centred Surface Completion for Unsupervised Domain Adaptation in 3D Object Detection

arXiv (2022)

Abstract
Every autonomous driving dataset has a different configuration of sensors, originating from distinct geographic regions and covering various scenarios. As a result, 3D detectors tend to overfit the datasets they are trained on. This causes a drastic decrease in accuracy when the detectors are trained on one dataset and tested on another. We observe that lidar scan pattern differences form a large component of this reduction in performance. We address this in our approach, SEE-VCN, by designing a novel viewer-centred surface completion network (VCN) to complete the surfaces of objects of interest within an unsupervised domain adaptation framework, SEE. With SEE-VCN, we obtain a unified representation of objects across datasets, allowing the network to focus on learning geometry, rather than overfitting on scan patterns. By adopting a domain-invariant representation, SEE-VCN can be classed as a multi-target domain adaptation approach where no annotations or re-training is required to obtain 3D detections for new scan patterns. Through extensive experiments, we show that our approach outperforms previous domain adaptation methods in multiple domain adaptation settings. Our code and data are available at https://github.com/darrenjkt/SEE-VCN.
Keywords
3D object detection, autonomous driving dataset, distinct geographic regions, domain-invariant representation, drastic decrease, lidar scan pattern differences, multiple domain adaptation settings, multi-target domain adaptation approach, novel viewer-centred surface completion network, previous domain adaptation methods, scan patterns, unsupervised domain adaptation framework