The rate of technological change is rapidly outpacing today's methods for understanding how new advancements are applied within industrial-organizational (I-O) psychology. To further complicate matters, attempts to explain observed differences or measurement equivalence across devices are often atheoretical or fail to specify why a technology should (or should not) affect the measured construct. For example, understanding how technology influences construct measurement in personnel testing and assessment is critical for explaining or predicting other practical issues such as accessibility, security, and scoring. Theory development is therefore needed to guide research hypotheses, manage expectations, and address these issues at the intersection of technology and I-O psychology. This article extends a Society for Industrial and Organizational Psychology (SIOP) 2016 panel session that (re)introduced conceptual frameworks that can help explain how and why measurement equivalence or nonequivalence is observed in the context of selection and assessment. We outline three potential conceptual frameworks as candidates for further research, evaluation, and application, and argue for a similar conceptual approach to explaining how technology may influence other psychological phenomena.
Recent usage data suggest that job applicants are increasingly completing online selection assessments on mobile devices (e.g., smartphones). To evaluate the appropriateness of this technology, this study examined the measurement equivalence of selection assessments delivered on mobile and nonmobile devices (e.g., personal computers). Measurement invariance tests conducted with multigroup confirmatory factor analysis suggest that mobile versions of a cognitive ability‐type assessment, two biodata assessments, a multimedia work simulation, and a text‐based situational judgment test are equivalent to their nonmobile versions. However, on the situational judgment test, latent means for mobile device users were half a standard deviation lower than those for nonmobile users. Implications for mobile device usage within selection and assessment are discussed.
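To make the abstract's key distinction concrete — invariant measurement yet a lower latent mean for one group — the following is a minimal numpy simulation, not the study's multigroup CFA procedure. All loadings, intercepts, residual variances, and sample sizes are hypothetical; the sketch only illustrates that when loadings and intercepts are identical across groups (scalar invariance), a latent mean gap shows up directly as a gap in composite scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-factor model for a 4-item situational judgment test.
# Both groups share the same loadings and intercepts (scalar invariance),
# but the mobile group's latent mean is 0.5 SD lower, as in the finding.
loadings = np.array([0.8, 0.7, 0.6, 0.75])
intercepts = np.array([3.0, 2.8, 3.2, 3.1])
n = 5000

def simulate(latent_mean):
    """Generate item responses: intercept + loading * theta + noise."""
    theta = rng.normal(latent_mean, 1.0, size=n)
    noise = rng.normal(0.0, 0.5, size=(n, loadings.size))
    return intercepts + np.outer(theta, loadings) + noise

nonmobile = simulate(0.0)
mobile = simulate(-0.5)

# With invariant loadings and intercepts, the composite-score gap mirrors
# the latent mean gap (standardized by the pooled composite SD).
comp_nm, comp_m = nonmobile.mean(axis=1), mobile.mean(axis=1)
pooled_sd = np.sqrt((comp_nm.var(ddof=1) + comp_m.var(ddof=1)) / 2)
d = (comp_nm.mean() - comp_m.mean()) / pooled_sd
print(round(d, 2))  # close to, but attenuated below, the latent gap of 0.5
```

The observed standardized difference comes out slightly under 0.5 because item-level noise inflates the composite's standard deviation; a latent-variable model would recover the full 0.5 SD gap.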
Applying the graded response model within the item response theory framework, the present study analyzes the psychometric properties of Karwowski’s creative self-efficacy (CSE) scale. In an ethnically diverse sample of US college students, the six items of the CSE scale fit a unidimensional latent structure well. The scale also showed adequate measurement precision (reliability), high item discrimination, and an appropriate range of item difficulty. Differential item functioning analyses indicated that the scale measured equivalently across gender. Additionally, openness to experience was positively related to CSE scale scores, providing some support for the scale’s convergent validity. Collectively, these results support the psychometric soundness of the CSE scale and identify avenues for future research.
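For readers unfamiliar with Samejima's graded response model named above, the core computation is simple: each polytomous item has a discrimination parameter and ascending thresholds, cumulative probabilities follow a logistic curve, and category probabilities are differences of adjacent cumulative probabilities. The sketch below implements that formula directly; the item parameters shown are purely illustrative, not estimates from the CSE scale.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities under the graded response model.

    theta : latent trait value for a respondent
    a     : item discrimination parameter
    b     : ascending threshold parameters (length K-1 for K categories)
    """
    b = np.asarray(b, dtype=float)
    # Cumulative probability of responding in category k or higher
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Bound the cumulative curve: P(lowest or higher) = 1, P(above top) = 0
    cum = np.concatenate(([1.0], p_star, [0.0]))
    # Probability of each specific category is a difference of adjacent bounds
    return cum[:-1] - cum[1:]

# Illustrative (hypothetical) parameters for one 5-point item:
probs = grm_category_probs(theta=0.0, a=2.0, b=[-1.5, -0.5, 0.5, 1.5])
print(probs.round(3))  # five probabilities summing to 1
```

With symmetric thresholds around zero, a respondent at theta = 0 is most likely to pick the middle category; higher discrimination (a) sharpens how quickly category probabilities shift as theta moves, which is what "high levels of item discrimination" refers to in the abstract.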