A Mixture Model for Random Responding Behavior in Forced-Choice Noncognitive Assessment: Implication and Application in Organizational Research

Organizational Research Methods (2023)

Abstract
For various reasons, respondents to forced-choice assessments (typically used for noncognitive psychological constructs) may respond randomly to individual items due to indecision, or globally due to disengagement. Random responding is therefore a complex source of measurement bias that threatens the reliability of forced-choice assessments, which are essential in high-stakes organizational testing scenarios such as hiring decisions. Traditional measurement models rely heavily on nonrandom, construct-relevant responses to yield accurate parameter estimates. When survey data contain many random responses, fitting traditional models may deliver biased results, which could attenuate measurement reliability. This study presents a new mixture item response theory model for forced-choice measures (the M-TCIR) that simultaneously models normal and random responses, distinguishing completely random from incompletely random responding. The feasibility of the M-TCIR was investigated via two Monte Carlo simulation studies. In addition, one empirical dataset was analyzed to illustrate the applicability of the M-TCIR in practice. The results revealed that most model parameters were adequately recovered, and the M-TCIR was a viable alternative for modeling both aberrant and normal responses with high efficiency.
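The M-TCIR itself is specified in the paper. Purely as intuition for how a mixture model can separate engaged from random responding, the toy sketch below (not the authors' model; the item probabilities, class prior, and respondent patterns are all made-up illustrative values) computes the posterior probability that a respondent belongs to the "engaged" class, assuming a completely random responder picks either option of each forced-choice pair with probability 0.5.

```python
import numpy as np

# Hypothetical probability that an ENGAGED respondent picks option A on each
# of 20 forced-choice pairs (illustrative constant; real models would derive
# this from latent trait and item parameters).
p_a = np.full(20, 0.8)

def posterior_engaged(responses, p_a, prior=0.8):
    """Posterior probability of the 'engaged' class given 0/1 pick-A responses.

    A completely random responder is assumed to pick either option with
    probability 0.5 on every pair; `prior` is the assumed base rate of
    engaged respondents.
    """
    ll_engaged = np.where(responses == 1, p_a, 1 - p_a).prod()
    ll_random = 0.5 ** len(responses)
    num = prior * ll_engaged
    return num / (num + (1 - prior) * ll_random)

# Two illustrative response patterns:
consistent = np.ones(20, dtype=int)   # always picks the favored option
haphazard = np.array([0, 1] * 10)     # alternates, mismatching half the time

print(posterior_engaged(consistent, p_a))  # high: classified as engaged
print(posterior_engaged(haphazard, p_a))   # low: classified as random
```

Even this two-class toy shows why random responses bias parameter estimates when ignored: the haphazard pattern contributes almost no information about the trait, and a model that treats it as construct-relevant will absorb that noise into its item and person parameters.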
Keywords
random responding behavior, assessment, forced-choice