Measuring object recognition ability: Reliability, validity, and the aggregate z-score approach

Behavior Research Methods (2024)

Abstract
Measurement of domain-general object recognition ability (o) requires minimization of domain-specific variance. One approach is to model o as a latent variable explaining performance on a battery of tests which differ in task demands and stimuli; however, time and sample requirements may be prohibitive. Alternatively, an aggregate measure of o can be obtained by averaging z-scores across tests. Using data from Sunday et al. (2022, Journal of Experimental Psychology: General, 151, 676–694), we demonstrated that aggregate scores from just two such object recognition tests provide a good approximation (r = .79) of factor scores calculated from a model using a much larger set of tests. Some test combinations produced correlations of up to r = .87 with factor scores. We then revised these tests to reduce testing time, and developed an odd one out task, using a unique object category on nearly every trial, to increase task and stimuli diversity. To validate our measures, 163 participants completed the object recognition tests on two occasions, one month apart. Providing the first evidence that o is stable over time, our short aggregate o measure demonstrated good test–retest reliability (r = .77). The stability of o could not be completely accounted for by intelligence, perceptual speed, and early visual ability. Structural equation modeling suggested that our tests load significantly onto the same latent variable, and revealed that as a latent variable, o is highly stable (r = .93). Aggregation is an efficient method for estimating o, allowing investigation of individual differences in object recognition ability to be more accessible in future studies.
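The aggregation procedure described in the abstract is straightforward to illustrate: each test's scores are standardized (z-scored) across participants, and the resulting z-scores are averaged within each participant. Below is a minimal Python sketch of that idea; the function name `aggregate_o` and the toy accuracy data are hypothetical, not taken from the paper.

```python
from statistics import mean, stdev

def aggregate_o(scores):
    """Estimate o by averaging within-test z-scores.

    scores: list of rows, one row per participant; each row holds
    one raw score per object recognition test. Each test's column
    is z-scored across participants (sample SD), then z-scores are
    averaged within each participant.
    """
    n_tests = len(scores[0])
    cols = [[row[t] for row in scores] for t in range(n_tests)]
    mus = [mean(c) for c in cols]
    sds = [stdev(c) for c in cols]
    return [mean((row[t] - mus[t]) / sds[t] for t in range(n_tests))
            for row in scores]

# Hypothetical data: 4 participants x 2 tests (raw accuracies)
raw = [[0.90, 0.85],
       [0.70, 0.60],
       [0.80, 0.75],
       [0.60, 0.70]]
o_hat = aggregate_o(raw)
```

Because each column of z-scores has mean zero, the aggregate scores also average to zero across participants; only relative standing is interpretable, which is all that correlational analyses of individual differences require.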
Keywords
Object recognition, Individual differences, Measurement, High-level vision