Enhanced Learning to Rank using Cluster-loss Adjustment.

COMAD/CODS (2019)

Cited by 3 | Viewed 1
Abstract
Most learning-to-rank (LTR) algorithms, such as Ranking SVM, RankNet, LambdaRank, and LambdaMART, use only relevance label judgments as ground truth for training. However, in common scenarios such as ranking information cards (Google Now and other personal assistants), mobile notifications, or Netflix recommendations, additional information can be captured from user behavior and from how the user interacts with the retrieved items. Within the relevance labels, there may be different sets whose information (i.e., cluster information) can be derived implicitly from user interaction (positive, negative, neutral, etc.) or from explicit user feedback ('Do not show again', 'I like this suggestion', etc.). This additional information provides significant knowledge for training a ranking algorithm with a two-dimensional output variable. This paper proposes a novel method that uses the relevance label together with cluster information to better train ranking models. Results on a user-trial notification-ranking dataset and on standard datasets such as LETOR 4.0, MSLR-WEB10K, and YahooLTR further support this claim.
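To make the idea concrete, here is a minimal illustrative sketch of combining a relevance label with cluster information in a pairwise loss. This is an assumption about how such an adjustment could look, not the paper's actual formulation: the function name, the `cluster_bonus` parameter, and the margin-widening rule are all hypothetical.

```python
def cluster_adjusted_pairwise_loss(score_i, score_j,
                                   rel_i, rel_j,
                                   cluster_i, cluster_j,
                                   base_margin=1.0, cluster_bonus=0.5):
    """Hypothetical pairwise hinge loss for a pair where item i should
    rank above item j (rel_i > rel_j).

    Sketch of the cluster-loss-adjustment idea: when the two items fall
    in different user-interaction clusters (e.g. 'positive' vs.
    'negative' feedback), the required ranking margin is enlarged, so
    the model is penalized more for confusing items across clusters.
    """
    if rel_i <= rel_j:
        # Only pairs with a clear relevance preference contribute.
        return 0.0
    margin = base_margin + (cluster_bonus if cluster_i != cluster_j else 0.0)
    # Standard hinge: zero loss once score_i exceeds score_j by the margin.
    return max(0.0, margin - (score_i - score_j))


# Same score gap, but the cross-cluster pair incurs a larger loss:
same_cluster = cluster_adjusted_pairwise_loss(2.0, 1.0, 1, 0, 'pos', 'pos')
cross_cluster = cluster_adjusted_pairwise_loss(2.0, 1.0, 1, 0, 'pos', 'neg')
```

Under this sketch, `same_cluster` is 0.0 (the score gap of 1.0 meets the base margin) while `cross_cluster` is 0.5 (the widened margin of 1.5 is not yet met), so cross-cluster confusions drive larger gradient updates.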
Keywords
Information Retrieval, Learning to Rank, Preference Learning