Learning to Rank in the Age of Muppets: Effectiveness-Efficiency Tradeoffs in Multi-Stage Ranking.

SUSTAINLP (2021)

Cited by 9 | Views 2
Abstract
It is well known that rerankers built on pretrained transformer models such as BERT have dramatically improved retrieval effectiveness in many tasks. However, these gains have come at substantial costs in terms of efficiency, as noted by many researchers. In this work, we show that it is possible to retain the benefits of transformer-based rerankers in a multi-stage reranking pipeline by first using feature-based learning-to-rank techniques to reduce the number of candidate documents under consideration without adversely affecting their quality in terms of recall. Applied to the MS MARCO passage and document ranking tasks, we are able to achieve the same level of effectiveness, but with up to an 18× increase in efficiency. Furthermore, our techniques are orthogonal to other methods focused on accelerating transformer inference, and thus can be combined for even greater efficiency gains. A higher-level message from our work is that, even though pretrained transformers dominate the modern IR landscape, there are still important roles for "traditional" LTR techniques, and that we should not forget history.
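The pipeline the abstract describes has three stages: cheap first-stage retrieval, a feature-based learning-to-rank (LTR) stage that prunes the candidate pool, and an expensive transformer reranker that scores only the survivors. Below is a minimal sketch of that structure, not the authors' implementation: it uses scikit-learn's GradientBoostingRegressor as a stand-in for a feature-based LTR model such as LambdaMART, a toy scoring function in place of BERT inference, and synthetic features and labels so the example is self-contained; the pruning depth k and all data are illustrative assumptions.

```python
# Minimal sketch of a multi-stage ranking pipeline (illustrative, not the paper's code).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for e.g. LambdaMART

rng = np.random.default_rng(0)

# Stage 0: assume a first-stage retriever (e.g. BM25) returned 1000 candidates,
# each with a small feature vector (term statistics, query-document overlap, ...).
candidates = [f"doc{i}" for i in range(1000)]
features = rng.normal(size=(1000, 8))

# Stage 1: feature-based LTR. We fit a pointwise GBDT on synthetic relevance
# labels purely to keep the sketch runnable; in practice the model is trained
# offline on judged query-document pairs.
labels = rng.random(1000)
ltr = GradientBoostingRegressor(n_estimators=50).fit(features, labels)
ltr_scores = ltr.predict(features)

# Keep only the top-k candidates by LTR score. Shrinking k is the knob that
# trades a little recall for a large cut in transformer inference cost.
k = 50
top_k = np.argsort(-ltr_scores)[:k]

# Stage 2: transformer reranking over the pruned pool only. A real system
# would run a cross-encoder (e.g. a BERT reranker) here; we use a toy
# function so the sketch stays self-contained.
def expensive_rerank_score(doc_id: str) -> float:
    return rng.random()  # placeholder for BERT(query, document) inference

reranked = sorted(
    ((expensive_rerank_score(candidates[i]), candidates[i]) for i in top_k),
    reverse=True,
)
print(reranked[:10])
```

In this sketch, the efficiency gain comes entirely from the pruning depth: transformer inference cost scales with the number of documents scored, so cutting 1000 candidates to 50 reduces that cost by 20× as long as the LTR stage preserves recall. The choice of 1000, 50, and the feature dimensionality here is arbitrary.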
Keywords
Transfer Learning, Representation Learning, Meta-Learning, Pretrained Models