Large Language Models as Data Augmenters for Cold-Start Item Recommendation
CoRR (2024)
Abstract
The reasoning and generalization capabilities of LLMs can help us better
understand user preferences and item characteristics, offering exciting
prospects to enhance recommendation systems. Though effective when user-item
interactions are abundant, conventional recommendation systems struggle to
recommend cold-start items without historical interactions. To address this, we
propose utilizing LLMs as data augmenters to bridge the knowledge gap on
cold-start items during training. We employ LLMs to infer user preferences for
cold-start items based on textual descriptions of users' historical behaviors
and new item descriptions. The augmented training signals are then incorporated
into the training of downstream recommendation models through an auxiliary
pairwise loss. Through experiments on public Amazon datasets, we demonstrate
that LLMs can effectively augment the training signals for cold-start items,
leading to significant improvements in cold-start item recommendation for
various recommendation models.
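The abstract does not give the exact form of the auxiliary pairwise loss, but a standard choice for pairwise preference signals is a BPR-style log-sigmoid loss over (preferred, non-preferred) item pairs. The sketch below is a minimal illustration under that assumption: `llm_pairs` and the weight `aux_weight` are hypothetical names, and the scores would in practice come from the downstream recommendation model.

```python
import math

def pairwise_loss(score_preferred: float, score_other: float) -> float:
    """BPR-style pairwise loss: -log sigmoid(s_i - s_j).

    Small when the preferred item scores higher than the other item,
    large when the ordering is violated.
    """
    margin = score_preferred - score_other
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def augmented_loss(main_loss: float, llm_pairs: list, aux_weight: float = 0.1) -> float:
    """Combine the main recommendation loss with an auxiliary term built
    from LLM-inferred preference pairs on cold-start items.

    llm_pairs: list of (score_preferred, score_other) tuples, where the
    ordering of each pair was inferred by the LLM from user-history and
    item-description text (hypothetical interface).
    """
    if not llm_pairs:
        return main_loss
    aux = sum(pairwise_loss(s_i, s_j) for s_i, s_j in llm_pairs) / len(llm_pairs)
    return main_loss + aux_weight * aux

# A correctly ordered pair (preferred item scored higher) yields a small loss;
# a violated ordering yields a larger one.
low = pairwise_loss(2.0, 1.0)
high = pairwise_loss(1.0, 2.0)
```

In a real training loop the auxiliary term would be computed on model score tensors so gradients flow back into the item and user embeddings; the scalar version here only shows the shape of the objective.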