Using prediction markets to estimate ratings of academic research quality in a mock Research Excellence Framework exercise

Jacqueline Thompson, Anna Dreber, Tom R Gaunt, Michael Gordon, Felix Holzmeister, Juergen Huber, Magnus Johannesson, Michael Kirchler, Matthew Lyon, Ian Penton-Voak, Thomas Pfeiffer, Marcus Robert Munafo

Crossref (2022)

Abstract
Background: The Research Excellence Framework (REF) in the United Kingdom is a system for assessing the quality of research, requiring higher education institutions to select a sample of research outputs to submit for assessment. We tested whether prediction markets, in a novel application, could help institutions decide which research outputs to submit to the REF.

Methods: Prediction markets are tools that aggregate knowledge from a group of participants who trade on the outcomes of future events. We ran six prediction markets across three institutions in four academic fields, asking participants to predict which papers in their department would score above a certain threshold in the REF.

Results: Despite low participation in some markets, overall the prediction markets predicted mock REF outcomes with greater than 70% accuracy. We also compared the results of one market to random forest and logistic regression models trained to predict REF scores from bibliometrics, and found that these models had accuracy similar to the prediction markets and a high correlation with final market price, suggesting trades were partly informed by publication metrics.

Conclusions: Our study provides a proof of concept that prediction markets can be used to forecast REF ratings, and that they capture similar information to other methods. Due to their practical limitations, prediction markets are unlikely to entirely replace other methods of REF selection or research assessment, but they may offer a useful addition to REF selection and may be valuable in training contexts.
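The abstract does not specify the market mechanism used, but small prediction markets of this kind are commonly run with an automated market maker such as Hanson's Logarithmic Market Scoring Rule (LMSR), in which the instantaneous price of an outcome serves as the crowd's probability estimate. The sketch below is purely illustrative of how such a market aggregates beliefs about a binary question like "will this paper score above the threshold?"; the class name, liquidity parameter, and outcome labels are assumptions, not details from the paper.

```python
import math

class LMSRMarket:
    """Illustrative LMSR market maker for a binary REF question.

    Cost function: C(q) = b * ln(sum_i exp(q_i / b)).
    The price of an outcome equals the probability the market
    currently assigns to it; buying shares raises that price.
    """

    def __init__(self, b=10.0, outcomes=("above", "below")):
        self.b = b                            # liquidity parameter (assumed)
        self.q = {o: 0.0 for o in outcomes}   # shares outstanding per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(v / self.b) for v in q.values()))

    def price(self, outcome):
        # Instantaneous price = probability implied by current holdings
        denom = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket(b=10.0)
p0 = market.price("above")        # 0.5 before any trades
cost = market.buy("above", 5.0)   # trader backs "scores above threshold"
p1 = market.price("above")        # implied probability rises after the buy
```

Under this mechanism the final market price for each paper can be read directly as the group's aggregated probability that the paper clears the REF threshold, which is the quantity the study compares against mock REF panel scores and the bibliometric models.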