Evaluation of adherence to STARD for Abstracts in a diverse sample of diagnostic accuracy abstracts published in 2012 and 2019 reveals suboptimal reporting practices

Journal of Clinical Epidemiology (2024)

Abstract
Objective: To evaluate the completeness of reporting in a sample of abstracts of diagnostic accuracy studies before and after the release of STARD for Abstracts in 2017.

Methods: We included 278 diagnostic accuracy abstracts published in 2012 (N=138) and 2019 (N=140) and indexed in EMBASE. We analyzed their adherence to 10 items of the 11-item STARD for Abstracts checklist and explored variability in reporting across abstract characteristics using multivariable Poisson modeling.

Results: Most of the 278 abstracts (75%) were published in discipline-specific journals, with a median impact factor of 2.9 (IQR: 1.9-3.7). The largest share of abstracts (41%) reported on imaging tests. Overall, a mean of 5.4/10 (SD: 1.4) STARD for Abstracts items was reported (range: 1.2-9.7). Items reported in less than one-third of abstracts included ‘eligible patient demographics’ (24%), ‘setting of recruitment’ (30%), ‘method of enrolment’ (18%), ‘estimates of precision for accuracy measures’ (26%), and ‘protocol registration details’ (4%). We observed substantial variability in reporting across several abstract characteristics; higher adherence was associated with use of a structured abstract, no journal limit on abstract word count, an abstract word count above the median, a one-gate enrolment design, and prospective data collection. There was no evidence of an increase in the number of reported items between 2012 and 2019 (5.2 vs. 5.5 items; adjusted reporting ratio 1.04 [95% CI: 0.98-1.10]).

Conclusion: This sample of diagnostic accuracy abstracts revealed suboptimal reporting practices, with no improvement between 2012 and 2019. The test evaluation field could benefit from targeted knowledge translation strategies to improve the completeness of reporting in abstracts.
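The "reporting ratio" in the Results is the exponentiated coefficient from a Poisson model of the per-abstract count of reported STARD items. As a minimal sketch of that idea, the snippet below computes an unadjusted rate ratio with a Wald 95% CI for a single binary covariate (publication year), for which the Poisson maximum-likelihood estimate reduces to the ratio of mean counts. The item counts here are hypothetical, and the paper's multivariable model additionally adjusts for abstract characteristics, which this sketch omits.

```python
import math

def poisson_reporting_ratio(counts_ref, counts_cmp):
    """Unadjusted Poisson rate ratio (counts_cmp vs. counts_ref) with Wald 95% CI.

    Each argument is a list of per-abstract counts of reported checklist items.
    For a single binary covariate, the Poisson MLE of the rate ratio equals the
    ratio of the group means, and the SE of the log rate ratio is
    sqrt(1/sum(counts_ref) + 1/sum(counts_cmp)).
    """
    n0, n1 = len(counts_ref), len(counts_cmp)
    s0, s1 = sum(counts_ref), sum(counts_cmp)
    rr = (s1 / n1) / (s0 / n0)           # ratio of mean counts
    se = math.sqrt(1 / s0 + 1 / s1)      # SE of log(rr)
    lo = rr * math.exp(-1.96 * se)
    hi = rr * math.exp(1.96 * se)
    return rr, lo, hi

# Hypothetical data, not from the study: counts of reported items (0-10)
counts_2012 = [5, 6, 4, 5, 6, 5, 4, 6, 5, 6]
counts_2019 = [6, 5, 6, 5, 6, 6, 5, 4, 6, 6]
rr, lo, hi = poisson_reporting_ratio(counts_2012, counts_2019)
```

A confidence interval that spans 1.0, as in the paper's adjusted estimate (0.98-1.10), is consistent with no detectable change in reporting between the two years.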
Key words
Reporting guidelines, Research methods, Abstracting and indexing, Diagnostic accuracy, Sensitivity and specificity