Explain then Rank: Scale Calibration of Neural Rankers Using Natural Language Explanations from Large Language Models
CoRR (2024)
Abstract
Scale calibration in ranking systems is the process of adjusting ranker
outputs to correspond to meaningful quantities such as click-through rates
or relevance, which is crucial for reflecting real-world value and thereby
improving the system's effectiveness and reliability. Although calibrated
ranking losses have been studied for learning-to-rank models, the specific
problem of scale calibration for neural rankers, which excel at processing
textual information, has not been thoroughly examined. Applying existing
scale calibration techniques to these models is challenging due to their
complexity and intensive training requirements, often yielding suboptimal
results.
This study explores the potential of large language models (LLMs) to
provide uncertainty measurements for a query-document pair that correlate
with scale-calibrated scores. We employ Monte Carlo sampling to estimate
relevance probabilities from LLMs and incorporate natural language
explanations (NLEs) to articulate this uncertainty, conducting
comprehensive experiments on two major document ranking datasets. Our
findings show that the NLE-based approach outperforms existing calibration
methods under various training scenarios, yielding better-calibrated
neural rankers.