Gender, Status, and Openness to Being Wrong: Field Experimental Evidence from Scientific Peer Review

Abstract
Many organizations, particularly those in the domain of scientific research, rely on experts to evaluate new ideas. However, when the objects to be evaluated are complex and require the opinions of multiple experts, it is unclear whether experts should provide evaluations independently or collaboratively. Although normative models of decision-making suggest that information exchange among individuals improves judgments, it is unknown whether and under what conditions experts actually utilize information from one another. Here, we report an experiment that measures information utilization among 277 expert reviewers of 47 multidisciplinary award applications. Reviewers were faculty at US-based medical schools. In particular, we measure whether reviewers update how they score applications after observing the scores of artificial "other reviewers." The scores of these other reviewers were randomly generated, and their discipline was experimentally assigned to be either the same as or different from that of the reviewer. We found that reviewers updated their scores in 47% of cases after exposure to the artificial stimuli. Contrary to normative models, reviewers were insensitive to the disciplinary expertise of the stimulus. Much more important was the reviewer's own identity: female reviewers updated their scores 12% more often than male reviewers. Similarly, reviewers with relatively high status (H-index) updated substantially less often than low-status reviewers. Lastly, updating was more common for medium- and high-scoring applications, leading to high turnover among the top proposals before and after exposure to the stimuli. The experiment extends findings on social influence from non-expert groups to experts, and suggests a new pathway through which bias can enter evaluations: gendered openness to external information.