Improving Pairwise Learning For Item Recommendation From Implicit Feedback

WSDM (2014)

Cited by 422 | Views 627
Abstract
Pairwise algorithms are popular for learning recommender systems from implicit feedback. For each user, or more generally context, they try to discriminate between a small set of selected items and the large set of remaining (irrelevant) items. Learning is typically based on stochastic gradient descent (SGD) with uniformly drawn pairs. In this work, we show that convergence of such SGD learning algorithms slows down considerably if the item popularity has a tailed distribution. We propose a non-uniform item sampler to overcome this problem. The proposed sampler is context-dependent and oversamples informative pairs to speed up convergence. An efficient implementation with constant amortized runtime costs is developed. Furthermore, it is shown how the proposed learning algorithm can be applied to a large class of recommender models. The properties of the new learning algorithm are studied empirically on two real-world recommender system problems. The experiments indicate that the proposed adaptive sampler substantially improves the convergence of the state-of-the-art learning algorithm without negative effects on prediction quality or iteration runtime.
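To make the idea concrete, the sketch below shows a BPR-style pairwise SGD update where the negative item is drawn from a non-uniform, model-dependent distribution instead of uniformly. It is not the authors' implementation: the paper's sampler achieves constant amortized cost, whereas this illustrative version rescores all items per draw for clarity. The function name sample_negative_adaptive, the rank-decay parameter lam, and the toy data are hypothetical.

```python
# Minimal sketch, assuming a BPR-style matrix factorization model.
# Names and hyperparameters are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 1000, 16
lr, reg = 0.05, 0.01

U = 0.1 * rng.standard_normal((n_users, k))   # user (context) factors
V = 0.1 * rng.standard_normal((n_items, k))   # item factors

# toy implicit feedback: each user has a small set of positive items
positives = {u: set(rng.choice(n_items, size=5, replace=False))
             for u in range(n_users)}

def sample_negative_adaptive(u, lam=0.01):
    """Draw a negative item with probability decaying in its current rank
    for user u, so highly scored (informative) negatives are oversampled."""
    scores = V @ U[u]                      # current model scores for all items
    order = np.argsort(-scores)            # items from best- to worst-ranked
    ranks = np.empty(n_items, dtype=int)
    ranks[order] = np.arange(n_items)
    p = np.exp(-lam * ranks)               # geometric-like decay over ranks
    p /= p.sum()
    while True:
        j = rng.choice(n_items, p=p)
        if j not in positives[u]:
            return j

def bpr_step(u, i, j):
    """One pairwise SGD step pushing item i above item j for user u."""
    wu, hi, hj = U[u].copy(), V[i].copy(), V[j].copy()
    x_uij = wu @ (hi - hj)
    g = 1.0 / (1.0 + np.exp(x_uij))        # gradient of the log-sigmoid loss
    U[u] += lr * (g * (hi - hj) - reg * wu)
    V[i] += lr * (g * wu - reg * hi)
    V[j] += lr * (-g * wu - reg * hj)

for _ in range(1000):
    u = rng.integers(n_users)
    i = rng.choice(list(positives[u]))     # positive item
    j = sample_negative_adaptive(u)        # adaptively sampled negative
    bpr_step(u, i, j)
```

Replacing sample_negative_adaptive with a uniform draw over non-positive items recovers the standard sampler whose slow convergence under tailed item popularity motivates the paper.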
Keywords
Item Recommendation, Recommender Systems, Matrix Factorization, Factorization Model