Source-Free Multi-Domain Adaptation with Generally Auxiliary Model Training

2022 International Joint Conference on Neural Networks (IJCNN) (2022)

Abstract
Unsupervised domain adaptation transfers knowledge gained from one or more labeled source domains to a similar unlabeled target domain by eliminating the domain shift. Most existing domain adaptation methods require access to source data in order to match the source and target distributions. However, data privacy concerns often make sharing source data difficult or impossible, causing such methods to fail. Admittedly, a few previous studies address domain adaptation without source data, but they rarely consider data-free domain adaptation with multiple source domains, which contain richer knowledge. In this paper, we propose a new multi-source data-free domain adaptation method, generally auxiliary model training (GAM), which fits the source models to the target domain under the supervision of pseudo target labels rather than by matching data distributions. To collect high-quality initial pseudo target labels, our approach learns both specific and general source models, improving the generality of the source models through auxiliary learning. Furthermore, we introduce a class-balanced coefficient for each category, based on its number of samples, to reduce the misclassification often caused by data imbalance. Experiments on real-world classification datasets show that the proposed generally auxiliary training outperforms the baselines.
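The class-balanced coefficient described above can be sketched as inverse-frequency weighting over pseudo-label counts. This is a common realization of such a coefficient, assumed for illustration; the abstract does not give the paper's exact formula, and the function name and normalization are hypothetical:

```python
from collections import Counter

def class_balanced_weights(pseudo_labels, num_classes):
    """Per-class coefficients that up-weight rare classes.

    Inverse-frequency weighting, normalized so that a perfectly
    balanced label set yields a weight of 1.0 for every class.
    (Illustrative assumption; not the paper's exact definition.)
    """
    counts = Counter(pseudo_labels)
    # Guard against classes absent from the current pseudo labels.
    per_class = [max(counts.get(c, 0), 1) for c in range(num_classes)]
    total = sum(per_class)
    return [total / (num_classes * n) for n in per_class]
```

In a pseudo-label training loop, these coefficients would typically scale the per-sample loss by the weight of each sample's pseudo class, so minority classes contribute more to the gradient.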
Keywords
data imbalance, source-free multi-domain adaptation, labeled source domain, target distribution, data privacy, pseudo target labels, data distribution, unlabeled target domain, multi-source data-free domain adaptation, generally auxiliary model training, GAM training, unsupervised domain adaptation, knowledge transfer, class-balanced coefficient