Towards Unsupervised Domain Adaptation via Domain-Transformer

arXiv (2022)

Abstract
As a vital problem in pattern analysis and machine intelligence, Unsupervised Domain Adaptation (UDA) studies how to transfer an effective feature learner from a labeled source domain to an unlabeled target domain. Many methods based on Convolutional Neural Networks (CNNs) have achieved promising results over the past decades. Inspired by the success of Transformers, some methods tackle the UDA problem by adopting pure transformer architectures and interpret the models via long-range dependencies at the image-patch level. However, these models have high algorithmic complexity and weak interpretability. In this paper, we propose the Domain-Transformer (DoT) for UDA, which integrates CNN backbones with the core attention mechanism of Transformers from a new perspective. Specifically, a plug-and-play domain-level attention mechanism is proposed to learn the sample correspondence between domains. This differs significantly from existing methods, which only capture local interactions among image patches. Instead of explicitly modeling the distribution discrepancy at either the domain level or the class level, DoT learns transferable features by enforcing local semantic consistency across domains, where domain-level attention and manifold regularization are explored. Consequently, DoT requires neither pseudo-labels nor explicit optimization of the domain discrepancy. Theoretically, DoT is connected with optimal transport and statistical learning theory; this connection provides new insight into the core component of Transformers. Extensive experiments on several benchmark datasets validate the effectiveness of DoT.
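The abstract does not spell out the domain-level attention in detail, but its core idea (attention computed across samples from the two domains, rather than across patches within one image) can be sketched as below. This is a minimal illustrative sketch only, assuming scaled dot-product attention over CNN-backbone features; the names `domain_level_attention`, `src_feats`, and `tgt_feats` are ours, not from the paper.

```python
import torch
import torch.nn.functional as F

def domain_level_attention(src_feats, tgt_feats, temperature=1.0):
    """Cross-domain attention over samples (not image patches).

    src_feats: (n_s, d) CNN-backbone features of source samples.
    tgt_feats: (n_t, d) CNN-backbone features of target samples.
    Returns target features re-expressed as attention-weighted
    combinations of source samples, plus the correspondence matrix.
    """
    # Scaled dot-product similarity between every target sample
    # (query) and every source sample (key): shape (n_t, n_s).
    scores = tgt_feats @ src_feats.t() / (src_feats.size(1) ** 0.5)
    # Row-wise softmax gives, for each target sample, a distribution
    # over source samples -- a soft cross-domain correspondence.
    corr = F.softmax(scores / temperature, dim=1)
    # Transported representation of each target sample in source space.
    return corr @ src_feats, corr
```

Note that the row-stochastic matrix `corr` can be read as a relaxed, one-sided transport plan between the two sample sets, which is the kind of link to optimal transport the abstract alludes to.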
Keywords
unsupervised domain adaptation, domain-transformer