A two-phase knowledge distillation model for graph convolutional network-based recommendation

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS (2022)

Abstract
Graph convolutional network (GCN)-based recommendation has recently attracted significant attention in the recommender system community. Although current studies propose various GCNs to improve recommendation performance, existing methods suffer from two main limitations. First, user-item interaction data are generally sparse in practice, which makes these methods ineffective at learning user and item feature representations. Second, they usually rely on a dot-product operation to model user preferences for items, which leads to inaccurate preference learning. To address these limitations, this study adopts a design idea that differs sharply from existing work. Specifically, we introduce the concept of knowledge distillation into GCN-based recommendation and propose a two-phase knowledge distillation model (TKDM) to improve recommendation performance. In Phase I, a self-distillation method on a graph auto-encoder learns the user and item feature representations; the auto-encoder employs a simple two-layer GCN as the encoder and a fully connected layer as the decoder. Building on these representations, Phase II introduces a mutual-distillation method on a fully connected layer to learn user preferences for items with triple-based Bayesian personalized ranking (BPR). Extensive experiments on three real-world data sets demonstrate that TKDM outperforms both classic and state-of-the-art GCN-based recommendation methods.
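The two-phase design described above maps naturally onto a short sketch. The following is a minimal, hypothetical PyTorch rendering, not the authors' implementation: a graph auto-encoder (two-layer GCN encoder, fully connected decoder) with one common form of self-distillation for Phase I, and two fully connected preference heads trained with triple-based BPR plus mutual distillation for Phase II. All names, loss choices, and sizes (GCNEncoder, PrefHead, the MSE distillation terms, the toy graph) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_users, n_items, dim = 8, 12, 16
n = n_users + n_items

# Toy symmetric, self-looped, normalized adjacency over the user-item graph.
a = (torch.rand(n, n) < 0.2).float()
a = ((a + a.t()) > 0).float()
a.fill_diagonal_(1.0)
d = a.sum(1).pow(-0.5)
a_hat = d.unsqueeze(1) * a * d.unsqueeze(0)    # D^{-1/2} (A + I) D^{-1/2}

e0 = nn.Parameter(torch.randn(n, dim) * 0.1)   # free ID embeddings

class GCNEncoder(nn.Module):
    """Two-layer GCN encoder: each layer propagates E <- A_hat @ E @ W."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)
        self.w2 = nn.Linear(dim, dim, bias=False)

    def forward(self, a_hat, e):
        h = torch.relu(self.w1(a_hat @ e))
        return self.w2(a_hat @ h)

class GraphAutoEncoder(nn.Module):
    """Phase I backbone: GCN encoder plus fully connected decoder."""
    def __init__(self, dim):
        super().__init__()
        self.encoder = GCNEncoder(dim)
        self.decoder = nn.Linear(dim, dim)     # FC decoder

    def forward(self, a_hat, e):
        z = self.encoder(a_hat, e)
        return z, self.decoder(z)

def phase1_step(student, teacher, a_hat, e0):
    """Self-distillation sketch: the student matches a frozen copy of
    itself (one common form; both loss terms are assumptions)."""
    z_s, rec = student(a_hat, e0)
    with torch.no_grad():
        z_t, _ = teacher(a_hat, e0)
    return F.mse_loss(rec, e0) + F.mse_loss(z_s, z_t)

class PrefHead(nn.Module):
    """FC preference scorer replacing the dot product on (user, item)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                nn.Linear(dim, 1))

    def forward(self, u_e, i_e):
        return self.fc(torch.cat([u_e, i_e], dim=-1)).squeeze(-1)

def bpr(head, u_e, pos_e, neg_e):
    """Triple-based BPR: rank the observed item above the unobserved one."""
    return -F.logsigmoid(head(u_e, pos_e) - head(u_e, neg_e)).mean()

def phase2_step(head_a, head_b, u_e, pos_e, neg_e):
    """Mutual distillation: each head learns BPR plus the peer's scores."""
    loss_a = bpr(head_a, u_e, pos_e, neg_e) + \
        F.mse_loss(head_a(u_e, pos_e), head_b(u_e, pos_e).detach())
    loss_b = bpr(head_b, u_e, pos_e, neg_e) + \
        F.mse_loss(head_b(u_e, pos_e), head_a(u_e, pos_e).detach())
    return loss_a + loss_b

model, teacher = GraphAutoEncoder(dim), GraphAutoEncoder(dim)
teacher.load_state_dict(model.state_dict())    # frozen self-copy as teacher
print("phase I loss:", phase1_step(model, teacher, a_hat, e0).item())

z, _ = model(a_hat, e0)
u_e, it_e = z[:n_users], z[n_users:]
users = torch.randint(0, n_users, (32,))
pos, neg = torch.randint(0, n_items, (32,)), torch.randint(0, n_items, (32,))
head_a, head_b = PrefHead(dim), PrefHead(dim)
print("phase II loss:",
      phase2_step(head_a, head_b, u_e[users], it_e[pos], it_e[neg]).item())
```

Detaching the peer's scores in `phase2_step` keeps each head's distillation target fixed during its own update, which is the usual way mutual learning between two students is stabilized.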
Keywords
deep learning, graph convolutional network, knowledge distillation, neural network, recommender system