
Using multi-rater and test-retest data to detect overlap within and between psychological scales

Crossref (2024)

Abstract
Correlations estimated in single-source data are poorly interpretable and mask item-level overlaps between scales. We describe how correlations can be adjusted for errors and biases using test-retest and multi-rater data, and we compare adjusted correlations among individual items with their human-rated semantic similarity (SS). Adjusted correlations were expected to rank item pairs similarly to SS but to exceed their values unless items were semantically identical. While unadjusted and adjusted correlations predicted items' SS rankings equally well across all items, adjusted correlations were superior at high levels of similarity. Agreement-adjusted correlations, in particular, were usually higher than SS, whereas unadjusted correlations often underestimated SS. We discuss uses of test-retest and multi-rater data for identifying construct redundancy and argue that SS often underestimates variables' empirical overlap.
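The abstract's reliability-based adjustment belongs to a family of techniques whose classic member is Spearman's correction for attenuation, in which test-retest correlations serve as reliability estimates. The paper's multi-rater adjustments are presumably more elaborate, but a minimal sketch of the basic idea (using hypothetical numbers, not values from the paper) might look like this:

```python
import math

def disattenuate(r_xy: float, r_xx: float, r_yy: float) -> float:
    """Spearman's correction for attenuation: estimate the correlation
    between two variables' true scores from their observed correlation
    r_xy and reliability estimates r_xx and r_yy (here, each item's
    test-retest correlation)."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical illustration only: two items correlate .45 as observed,
# and each has moderate test-retest reliability.
r_observed = 0.45
r_retest_item1 = 0.62
r_retest_item2 = 0.70

r_adjusted = disattenuate(r_observed, r_retest_item1, r_retest_item2)
print(f"adjusted correlation: {r_adjusted:.2f}")  # ~0.68
```

As the example suggests, adjusted correlations exceed the observed ones whenever reliabilities fall below 1, which is consistent with the abstract's expectation that adjusted correlations exceed SS unless items are semantically identical.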