Generating Target Image-Label Pairs For Unsupervised Domain Adaptation

IEEE TRANSACTIONS ON IMAGE PROCESSING (2020)

Abstract
Deep learning has demonstrated impressive success across a variety of machine learning problems. However, its performance often degrades when the training and test sets follow different distributions, a problem known as domain shift. Most current domain adaptation methods minimize the discrepancy between the source and target domains by aligning their marginal distributions without considering class-level matching; as a result, data from different classes may be mapped close together. To address this issue, we propose an unsupervised domain adaptation method that generates image-label pairs in the target domain: the model is augmented with the generated target pairs and thereby achieves class-level transfer. Specifically, we integrate a generative adversarial network (GAN) into the model predictor, where the generator, fed with labels, aims to produce the corresponding target-domain images under a well-designed semantic loss. Whereas previous methods focus on discrepancy reduction across domains, i.e., image-to-image translation, our model focuses on semantic preservation during image generation. The approach is straightforward yet effective for unsupervised domain adaptation. Using no target-domain labels in any experiment, we demonstrate the validity of our approach by presenting plausible generated target image-label pairs. In addition, the proposed method achieves the best or comparable performance on multiple unsupervised domain adaptation benchmarks covering image classification and semantic segmentation.
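To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of the core idea: a label-conditioned generator produces target-style images, a discriminator enforces target-domain realism, and a semantic loss ties each generated image back to its conditioning label. This is an illustrative reconstruction only, not the authors' code; the class names (Generator, Discriminator, Classifier), network sizes, and hyperparameters are all assumed toy values.

# Sketch: label-conditioned target-image generation with a semantic loss.
# Hypothetical names and toy dimensions; not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, LATENT_DIM, IMG_DIM = 10, 64, 32 * 32  # assumed toy sizes

class Generator(nn.Module):
    """Maps (noise, class label) to a flattened target-domain image."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )
    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class Discriminator(nn.Module):
    """Scores whether an image looks like real target-domain data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    """Task predictor; in the full method it would be trained on labeled
    source data. Its prediction on generated images drives the semantic loss."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.ReLU(),
            nn.Linear(256, NUM_CLASSES),
        )
    def forward(self, x):
        return self.net(x)

G, D, C = Generator(), Discriminator(), Classifier()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_target_images):
    """One adversarial step plus the semantic-consistency term."""
    b = real_target_images.size(0)
    z = torch.randn(b, LATENT_DIM)
    y = torch.randint(0, NUM_CLASSES, (b,))  # labels the generator is fed
    fake = G(z, y)

    # Discriminator: real target images vs. generated ones.
    d_loss = bce(D(real_target_images), torch.ones(b, 1)) + \
             bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D, and keep the generated image's predicted class
    # consistent with the conditioning label (the semantic loss).
    g_adv = bce(D(fake), torch.ones(b, 1))
    g_sem = F.cross_entropy(C(fake), y)
    g_loss = g_adv + g_sem
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage: one step on random stand-ins for unlabeled target images.
print(train_step(torch.randn(8, IMG_DIM)))

The g_sem term is what distinguishes this from plain conditional generation: it penalizes generated images whose predicted class drifts from the label they were generated for, which is how the generated pairs can serve as class-level supervision in the target domain.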
Keywords
Semantics, Adaptation models, Task analysis, Image segmentation, Data models, Feature extraction, Domain adaptation, generative adversarial network, image classification, semantic segmentation, image generation