Dual-Space Transfer Learning Based on an Indirect Mutual Promotion Strategy

International Journal of Computational Intelligence Systems (2022)

Abstract
Transfer learning is designed to leverage labeled knowledge in the source domain to help build classification models in the target domain, where labels are scarce or even unavailable. Previous studies have shown that high-level concepts extracted from original features are better suited to cross-domain classification tasks, so many transfer learning methods transfer knowledge by modeling high-level concepts on the original feature space. However, this approach has two limitations. First, learning high-level concepts directly on the original feature space reduces the proportion of shared information carried by common features while the knowledge-transfer bridge is being built. Second, when high-level concepts are learned only on the original feature space, the latent shared information contained in domain-specific features cannot be learned in a targeted way and therefore cannot be used effectively. To overcome these limitations, this paper proposes a novel method named Dual-Space Transfer Learning based on an Indirect Mutual Promotion Strategy (DSTL). DSTL is formalized as an optimization problem based on non-negative matrix tri-factorization. It first extracts the common features between domains and constructs a common feature space. The learning of high-level concepts in the common feature space and in the original feature space is then integrated through an indirect promotion strategy, so that the two feature spaces help each other and improve the learning of both common and domain-specific features. Systematic tests on benchmark data sets show the superiority of the DSTL method.
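The abstract formalizes DSTL as an optimization problem based on non-negative matrix tri-factorization (NMTF). As a point of reference, the sketch below shows plain NMTF of a term matrix X ≈ F S Gᵀ with the standard multiplicative updates; the dual-space coupling and indirect-promotion terms specific to DSTL are not given in the abstract, so all function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of non-negative matrix tri-factorization (NMTF):
# minimize ||X - F S G^T||_F^2 subject to F, S, G >= 0,
# using the standard multiplicative update rules.
import numpy as np

def nmtf(X, k_rows, k_cols, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative matrix X (m x n) as F @ S @ G.T,
    with F (m x k_rows), S (k_rows x k_cols), G (n x k_cols)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k_rows))
    S = rng.random((k_rows, k_cols))
    G = rng.random((n, k_cols))
    for _ in range(n_iter):
        # Multiplicative updates keep every factor non-negative.
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
    return F, S, G

# Toy usage on a documents-by-words matrix: F clusters documents,
# G clusters words into high-level concepts, S links the two.
X = np.abs(np.random.default_rng(1).random((20, 30)))
F, S, G = nmtf(X, k_rows=4, k_cols=5)
print(np.linalg.norm(X - F @ S @ G.T))  # reconstruction error
```

In cross-domain text classification, the word-cluster factor G plays the role of the high-level concepts the abstract describes; DSTL's contribution is learning such concepts jointly on two feature spaces rather than on the original space alone.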
Keywords
Cross-domain text classification, Dual-space transfer learning, High-level concepts, Non-negative matrix tri-factorization