CompUDA: Compositional Unsupervised Domain Adaptation for Semantic Segmentation under Adverse Conditions

2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Abstract
In autonomous driving, robust semantic segmentation under adverse weather conditions is a long-standing challenge. Imperfect camera observations under adverse conditions yield images with reduced visibility, which hinders both label annotation and semantic scene understanding. A common solution is to take a semantic segmentation model trained on a labeled source domain and perform unsupervised domain adaptation (UDA) to an unlabeled target domain with adverse conditions. Because visual observations in the target domain are imperfect, such adaptation needs special treatment to achieve good performance. In this paper, we propose a new compositional unsupervised domain adaptation (CompUDA) method that disentangles the domain gap into multiple factors, including style, visibility, and image quality. The domain gaps caused by these individual factors can then be addressed separately by introducing intermediate domains. Specifically, 1) to address the style gap, we perform source-to-intermediate domain adaptation and generate pseudo-labels for self-training in the target domain; 2) to address the visibility gap, we perform a geometry-aligned normal-to-adverse image translation and introduce a synthetic domain; 3) finally, to address the image-quality gap between the synthetic and target domains, we perform synthetic-to-real adaptation based on the generated pseudo-labels. CompUDA can be used in conjunction with a wide variety of semantic segmentation methods and yields significant performance improvements across datasets. The code is available at https://github.com/zhengziqiang/CompUDA.
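To make the composition of the three stages concrete, the following is a minimal sketch in PyTorch-style Python. Every name in it (SegModel, adapt_style, pseudo_label, translate_normal_to_adverse, adapt_quality) is a hypothetical stand-in rather than the paper's API, and the darkening step is only a placeholder for the paper's learned, geometry-aligned translation; the actual implementation lives in the repository linked above.

```python
# Hypothetical sketch of the three-stage CompUDA pipeline summarized above.
# All function and class names are illustrative, not the paper's actual API.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SegModel(nn.Module):
    # Toy per-pixel classifier standing in for a full segmentation network.
    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.head = nn.Conv2d(3, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x)

def adapt_style(model, src_x, src_y):
    # Stage 1 (style gap): source-to-intermediate adaptation, reduced here to a
    # single supervised step on labeled source images.
    loss = F.cross_entropy(model(src_x), src_y)
    loss.backward()

def pseudo_label(model, tgt_x, threshold: float = 0.9):
    # Confidence-thresholded pseudo-labels for self-training in the target domain.
    with torch.no_grad():
        probs = model(tgt_x).softmax(dim=1)
        conf, labels = probs.max(dim=1)
        labels[conf < threshold] = 255  # ignore low-confidence pixels
    return labels

def translate_normal_to_adverse(ref_x):
    # Stage 2 (visibility gap): geometry-aligned normal-to-adverse translation.
    # A learned translation network in the paper; crude darkening here.
    return ref_x * 0.3

def adapt_quality(model, syn_x, pl_y):
    # Stage 3 (image-quality gap): synthetic-to-real adaptation supervised by
    # the stage-1 pseudo-labels.
    if (pl_y != 255).any():  # guard against batches with no confident pixels
        loss = F.cross_entropy(model(syn_x), pl_y, ignore_index=255)
        loss.backward()

if __name__ == "__main__":
    model = SegModel()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    # Random tensors standing in for real source / reference / target batches.
    src_x, src_y = torch.randn(2, 3, 64, 64), torch.randint(0, 19, (2, 64, 64))
    ref_x = torch.randn(2, 3, 64, 64)  # normal-condition images aligned with the target
    tgt_x = torch.randn(2, 3, 64, 64)  # adverse-condition target images

    opt.zero_grad(); adapt_style(model, src_x, src_y); opt.step()   # 1) style
    pl_y = pseudo_label(model, tgt_x)                               # self-training labels
    syn_x = translate_normal_to_adverse(ref_x)                      # 2) visibility
    opt.zero_grad(); adapt_quality(model, syn_x, pl_y); opt.step()  # 3) image quality
```

The point of the composition, as the abstract describes it, is that each stage targets exactly one factor of the domain gap, with the pseudo-labels from stage 1 carrying supervision into stage 3.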