Learning Domain-Independent Representations via Shared Weight Auto-Encoder for Transfer Learning in Recommender Systems

IEEE Access (2022)

Abstract
Despite many recent advances, state-of-the-art recommender systems still struggle to achieve good performance on sparse datasets. To address the sparsity issue, transfer learning techniques have been investigated for recommender systems, but they tend to impose strict constraints on the content and structure of the data in the source and target domains. For transfer learning methods to work well, there should normally be homogeneity between the source and target domains, or a high degree of overlap between the source and target items. In this paper, we propose a novel transfer learning framework for mitigating the effects of sparsity and insufficient data. Our method requires neither homogeneity nor overlap between the source and target domains. We describe and evaluate a shared-parameter auto-encoder that jointly learns representations of user/item aspects in two domains, applying a Maximum Mean Discrepancy (MMD) loss during training to ensure that the source and target representations are close in distribution. The approach is evaluated on a number of benchmark datasets and demonstrates improved recommendation performance when the learned representations are used in collaborative filtering. The code used for this work is available on github.com.
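To make the abstract's idea concrete, here is a minimal PyTorch-style sketch of an auto-encoder whose weights are shared across source and target domains, trained with reconstruction loss plus an RBF-kernel MMD penalty on the latent codes. All names, layer sizes, the kernel bandwidth, and the MMD weight are illustrative assumptions, not taken from the paper or its released code.

```python
# Hypothetical sketch: one auto-encoder shared by both domains, with an MMD
# term pulling the source and target latent distributions together.
import torch
import torch.nn as nn

class SharedAutoEncoder(nn.Module):
    def __init__(self, n_features: int, n_latent: int = 64):
        super().__init__()
        # Encoder and decoder weights are shared by source and target inputs.
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def mmd_rbf(x, y, sigma: float = 1.0):
    """Squared MMD between two batches of latent codes under an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# One training step on a source batch and a target batch of the same width.
model = SharedAutoEncoder(n_features=300)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
x_src, x_tgt = torch.rand(32, 300), torch.rand(32, 300)  # placeholder data

recon_src, z_src = model(x_src)
recon_tgt, z_tgt = model(x_tgt)
loss = mse(recon_src, x_src) + mse(recon_tgt, x_tgt) + 0.5 * mmd_rbf(z_src, z_tgt)
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the learned latent codes `z_src` / `z_tgt` would then feed a downstream collaborative filtering model; the 0.5 weight on the MMD term is an arbitrary placeholder for whatever trade-off the paper tunes.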
Keywords
Transfer learning, Recommender systems, Adaptation models, Social networking (online), Task analysis, Speech recognition, Measurement, Recommender system, neural networks, transfer learning, domain adaptation