AIRA-DA: Adversarial Image Reconstruction Alignments for Unsupervised Domain Adaptive Object Detection

IEEE Robotics and Automation Letters (2023)

Abstract
Unsupervised domain adaptive object detection is a challenging perception task in which object detectors are adapted from a label-rich source domain to an unlabeled target domain, playing a vital role in autonomous driving and robot navigation. Since camera settings, weather, and lighting conditions vary from road to road, manually annotating data for every scenario is labor-intensive and impractical. Recent advances demonstrate the effectiveness of adversarial domain alignment, where domain invariance in the feature space is produced by adversarial training between the feature extractor and a domain discriminator. However, due to the domain shift, domain discrimination, especially on low-level features, is an easy task. This imbalances the adversarial training between the domain discriminator and the feature extractor. In this work, we achieve better domain alignment by introducing an auxiliary regularization task that improves the training balance. Specifically, we propose Adversarial Image Reconstruction (AIR) as the regularizer to facilitate the adversarial training of the feature extractor. We further design a multi-level Feature Alignment (A) module to enhance the adaptation performance. We refer to our work as AIRA-DA for ease of presentation. Our evaluations across several datasets with challenging domain shifts demonstrate that the proposed method outperforms all previous one- and two-stage methods in most settings. Extensive experiments show that the proposed method is capable of handling weather variation, cross-camera adaptation, and synthetic-to-real-world adaptation.
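To make the adversarial-alignment idea in the abstract concrete, below is a minimal PyTorch-style sketch of a gradient-reversal domain discriminator combined with an auxiliary image-reconstruction regularizer. All module names, channel sizes, decoder shapes, and loss terms here are illustrative assumptions for exposition; they are not the authors' released code or exact architecture.

```python
# Illustrative sketch only; shapes, channel counts, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used for adversarial feature alignment."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the feature extractor.
        return -ctx.lamb * grad_output, None


class DomainDiscriminator(nn.Module):
    """Predicts source (label 0) vs. target (label 1) from feature maps."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 256, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1),
        )

    def forward(self, feat, lamb=1.0):
        return self.net(GradReverse.apply(feat, lamb))


class ReconstructionHead(nn.Module):
    """Auxiliary decoder that reconstructs the input image from features,
    regularizing the extractor during adversarial training."""
    def __init__(self, in_ch, scale=8):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, feat):
        return self.decode(feat)


def adaptation_losses(feat_s, feat_t, img_s, img_t, disc, recon, lamb=1.0):
    """Domain-adversarial loss on both domains plus a reconstruction regularizer."""
    d_s = disc(feat_s, lamb)
    d_t = disc(feat_t, lamb)
    adv = (F.binary_cross_entropy_with_logits(d_s, torch.zeros_like(d_s)) +
           F.binary_cross_entropy_with_logits(d_t, torch.ones_like(d_t)))
    rec = F.l1_loss(recon(feat_s), img_s) + F.l1_loss(recon(feat_t), img_t)
    return adv, rec
```

In this sketch the reconstruction term gives the feature extractor an additional, easier objective, which is one way to counterbalance a discriminator that wins the adversarial game too easily on low-level features; the paper's multi-level alignment module would apply such alignment at several backbone stages.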
Keywords
Feature extraction, Training, Detectors, Poles and towers, Task analysis, Image reconstruction, Object detection, Computer vision for automation, deep learning methods