Leave No One Behind: Online Self-Supervised Self-Distillation for Sequential Recommendation
arXiv (2024)
Abstract
Sequential recommendation methods play a pivotal role in modern
recommendation systems. A key challenge lies in accurately modeling user
preferences in the face of data sparsity. To tackle this challenge, recent
methods leverage contrastive learning (CL) to derive self-supervision signals
by maximizing the mutual information of two augmented views of the original
user behavior sequence. Despite their effectiveness, CL-based methods encounter
a limitation in fully exploiting self-supervision signals for users with
limited behavior data, as users with extensive behaviors naturally offer more
information. To address this problem, we introduce a novel learning paradigm,
named Online Self-Supervised Self-distillation for Sequential Recommendation
($S^4$Rec), effectively bridging the gap between self-supervised learning and
self-distillation methods. Specifically, we employ online clustering to
proficiently group users by their distinct latent intents. Additionally, an
adversarial learning strategy is utilized to ensure that the clustering
procedure is not affected by the behavior length factor. Subsequently, we
employ self-distillation to facilitate the transfer of knowledge from users
with extensive behaviors (teachers) to users with limited behaviors (students).
Experiments conducted on four real-world datasets validate the effectiveness of
the proposed method. Code is available at https://github.com/xjaw/S4Rec.
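The abstract describes three training components: online clustering of users by latent intent, an adversarial term that keeps the clustering independent of behavior length, and self-distillation from long-sequence "teacher" users to short-sequence "student" users. The following is a minimal, illustrative PyTorch sketch of how these pieces could fit together in one module; all class names, loss formulations, and hyper-parameters (e.g., `num_clusters`, `alpha`, `tau`, how the teacher representation is formed) are assumptions for illustration rather than the paper's actual implementation, which is in the linked repository.

```python
# Hedged sketch of an S^4Rec-style objective: online prototype clustering,
# adversarial length-invariance via gradient reversal, and self-distillation.
# Names and formulations are illustrative assumptions, not the official code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass, reverses gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None


class S4RecSketch(nn.Module):
    def __init__(self, hidden_dim=64, num_clusters=32, num_length_bins=4):
        super().__init__()
        # Placeholder sequence encoder; the paper builds on a standard
        # sequential recommender backbone (assumption).
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Learnable prototypes used for online clustering of latent user intents.
        self.prototypes = nn.Parameter(torch.randn(num_clusters, hidden_dim))
        # Discriminator that tries to predict a user's behavior-length bin;
        # the gradient-reversal layer pushes the encoder to fool it.
        self.length_discriminator = nn.Linear(hidden_dim, num_length_bins)

    def forward(self, item_emb_seq, length_bin, teacher_repr=None,
                alpha=0.1, tau=0.1):
        # Encode the user's behavior sequence into a single representation.
        _, h = self.encoder(item_emb_seq)
        user_repr = h.squeeze(0)                      # (batch, hidden_dim)

        # Online clustering: soft assignment of users to intent prototypes.
        logits = (F.normalize(user_repr, dim=-1)
                  @ F.normalize(self.prototypes, dim=-1).T)
        cluster_assign = F.softmax(logits / tau, dim=-1)

        # Adversarial term: predict the behavior-length bin from the
        # gradient-reversed representation so clustering ignores length.
        reversed_repr = GradientReversal.apply(user_repr, alpha)
        adv_loss = F.cross_entropy(
            self.length_discriminator(reversed_repr), length_bin)

        # Self-distillation: short-sequence "students" are pulled toward a
        # teacher representation (e.g., aggregated from long-sequence users
        # in the same cluster -- an assumption about how the teacher is built).
        distill_loss = torch.tensor(0.0)
        if teacher_repr is not None:
            distill_loss = F.mse_loss(user_repr, teacher_repr.detach())

        return cluster_assign, adv_loss, distill_loss
```

In a full training loop these terms would be combined with the base recommendation loss (and the contrastive objective the abstract mentions) via weighting coefficients; the exact weights and teacher construction should be taken from the released code.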