Quantifying replicability of multiple studies in a meta-analysis

Annals of Applied Statistics (2024)

Abstract
For valid scientific discoveries, it is fundamental to evaluate whether research findings are replicable across different settings. While large-scale replication projects across broad research topics are not feasible, systematic reviews and meta-analyses (SRMAs) offer viable alternatives for assessing replicability. Due to subjective inclusion and exclusion of studies, SRMAs may contain nonreplicable study findings. However, there is no consensus on rigorous methods to assess the replicability of SRMAs or to explore sources of nonreplicability, and nonreplicability is often misconceived as high heterogeneity. This article introduces a new measure, the externally standardized residuals from a leave-m-studies-out procedure, to quantify replicability. It not only measures the impact of nonreplicability from unknown sources on the conclusion of an SRMA but also differentiates nonreplicability from heterogeneity. A new test statistic for replicability is derived. We explore its asymptotic properties and use extensive simulations and real data to illustrate this measure's performance. We conclude that replicability should be routinely assessed for all SRMAs and recommend sensitivity analyses once nonreplicable study results are identified in an SRMA.
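To give a concrete sense of the leave-m-studies-out residual, the sketch below illustrates the simplest case m = 1 under an assumed common-effect (inverse-variance) model. The function name `leave_one_out_residuals` and the toy data are hypothetical; the paper's actual statistic additionally accounts for between-study heterogeneity and general m, so this is only an assumption-based illustration, not the authors' method.

```python
import numpy as np

def leave_one_out_residuals(y, v):
    """Externally standardized residuals under a common-effect model.

    y : per-study effect estimates
    v : their within-study variances
    For each study i, the pooled estimate is recomputed without study i,
    and the residual y_i - mu_hat_(-i) is standardized by its variance.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # inverse-variance weights
    resid = np.empty_like(y)
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        mu_loo = np.sum(w[keep] * y[keep]) / np.sum(w[keep])  # pooled estimate without study i
        var_loo = 1.0 / np.sum(w[keep])                       # variance of that pooled estimate
        resid[i] = (y[i] - mu_loo) / np.sqrt(v[i] + var_loo)  # externally standardized residual
    return resid

# Toy example: five studies, the last one a potential nonreplicable outlier
y = [0.12, 0.08, 0.15, 0.10, 0.65]
v = [0.02, 0.03, 0.025, 0.02, 0.03]
print(np.round(leave_one_out_residuals(y, v), 2))
```

Under this simplified model, a residual that is large in absolute value flags a study whose result is inconsistent with the pooled evidence from the remaining studies, which is the intuition behind using such residuals to screen an SRMA for nonreplicable findings before running sensitivity analyses.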