Estimating the Hessian Matrix of Ranking Objectives for Stochastic Learning to Rank with Gradient Boosted Trees
arXiv (2024)
Abstract
Stochastic learning to rank (LTR) is a recent branch in the LTR field that
concerns the optimization of probabilistic ranking models. Their probabilistic
behavior enables certain ranking qualities that are impossible with
deterministic models. For example, they can increase the diversity of displayed
documents, increase fairness of exposure over documents, and better balance
exploitation and exploration through randomization. A core difficulty in LTR is
gradient estimation; for this reason, existing stochastic LTR methods have been
limited to differentiable ranking models (e.g., neural networks). This is in
stark contrast with the general field of LTR where Gradient Boosted Decision
Trees (GBDTs) have long been considered the state-of-the-art.
In this work, we address this gap by introducing the first stochastic LTR
method for GBDTs. Our main contribution is a novel estimator for the
second-order derivatives, i.e., the Hessian matrix, which is a requirement for
effective GBDTs. To efficiently compute both the first and second-order
derivatives simultaneously, we incorporate our estimator into the existing
PL-Rank framework, which was originally designed for first-order derivatives
only. Our experimental results indicate that stochastic LTR without the Hessian
has extremely poor performance, whereas with our estimated Hessian it is
competitive with the current state-of-the-art. Thus, through the contribution of
our novel Hessian estimation method, we have successfully introduced GBDTs to
stochastic LTR.
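To illustrate why the Hessian is a requirement for effective GBDTs (and is not a description of the paper's estimator itself), the sketch below shows the standard second-order boosting update used by libraries such as XGBoost and LightGBM: the optimal weight of a tree leaf is computed from per-document first- and second-order derivatives of the objective. The function name, toy values, and regularization constant are illustrative assumptions.

```python
import numpy as np

# In second-order gradient boosting, the optimal leaf weight is
#   w* = -sum(g_i) / (sum(h_i) + lambda)
# where g_i and h_i are the first- and second-order derivatives of the
# loss for each document in the leaf. A ranking objective must therefore
# supply both; if the Hessian entries are ignored, the Newton step
# degenerates to a plain (poorly scaled) gradient step.

def newton_leaf_value(grad, hess, reg_lambda=1.0):
    """Optimal leaf weight for the documents falling in one leaf."""
    return -np.sum(grad) / (np.sum(hess) + reg_lambda)

# Toy per-document derivatives for a single leaf (illustrative values).
g = np.array([0.4, -0.2, 0.1])
h = np.array([0.5, 0.3, 0.2])

w_newton = newton_leaf_value(g, h)                     # Hessian used
w_grad_only = newton_leaf_value(g, np.zeros_like(h))   # Hessian ignored
```

With the Hessian, the step size adapts to the local curvature of the objective; without it, only the regularizer `lambda` bounds the step, which is consistent with the poor performance the abstract reports for Hessian-free stochastic LTR.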