Make Large Language Model a Better Ranker
arXiv (2024)
Abstract
The evolution of Large Language Models (LLMs) has significantly enhanced
capabilities across various fields, leading to a paradigm shift in how
Recommender Systems (RSs) are conceptualized and developed. However, existing
research primarily focuses on point-wise and pair-wise recommendation
paradigms. These approaches prove inefficient in LLM-based recommenders due to
the high computational cost of utilizing Large Language Models. While some
studies have delved into list-wise approaches, they fall short in ranking
tasks. This shortfall is attributed to the misalignment between the objectives
of ranking and language generation. To this end, this paper introduces the
Language Model Framework with Aligned Listwise Ranking Objectives (ALRO). ALRO
is designed to bridge the gap between the capabilities of LLMs and the nuanced
requirements of ranking tasks within recommender systems. A key feature of ALRO
is the introduction of soft lambda loss, an adaptation of lambda loss tailored
to suit language generation tasks. Additionally, ALRO incorporates a
permutation-sensitive learning mechanism that addresses position bias, a
prevalent issue in generative models, without imposing additional computational
burdens during inference. Our evaluative studies reveal that ALRO outperforms
both existing embedding-based recommendation methods and LLM-based
recommendation baselines, highlighting its efficacy.
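For context on the objective ALRO adapts: the standard lambda loss weights each misordered pair by the NDCG change that swapping the pair would produce, so errors near the top of the ranking are penalized more heavily. The sketch below illustrates that standard formulation only; the abstract does not specify ALRO's "soft" variant over generation probabilities, so the function name and signature here are illustrative assumptions, not the paper's implementation.

```python
import math

def dcg_discount(rank):
    # Position discount for a 1-indexed rank, as in DCG.
    return 1.0 / math.log2(rank + 1)

def lambda_loss(scores, relevance):
    """Illustrative LambdaLoss-style pairwise objective (not ALRO's
    exact soft variant): each pair misordered by `scores` relative to
    `relevance` is weighted by the |ΔNDCG-like| gain of swapping it."""
    n = len(scores)
    # Current ranking induced by the model scores (rank 1 = highest score).
    order = sorted(range(n), key=lambda i: -scores[i])
    rank = {i: r + 1 for r, i in enumerate(order)}
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if relevance[i] > relevance[j]:
                # Weight: gain difference times discount difference
                # from swapping items i and j in the current ranking.
                delta = abs(relevance[i] - relevance[j]) * abs(
                    dcg_discount(rank[i]) - dcg_discount(rank[j]))
                # Logistic pairwise surrogate on the score margin.
                loss += delta * math.log2(1 + math.exp(-(scores[i] - scores[j])))
    return loss
```

A correctly ordered list yields a smaller loss than a reversed one, since every inverted pair contributes both a large swap weight and a large logistic penalty.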