Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these models and techniques and about how best to incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods is optimal under all criteria, methods that result in a single point estimate of each respondent’s ability (i.e., all types of “test scores”) are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error—especially PV methodology—stand out as the methods of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
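The PV workflow described above (run the analysis once per plausible value, then pool the results with Rubin's combining rules) can be sketched as follows. All numbers, the toy data-generating process, and the simple regression are illustrative assumptions; in operational LSAS data, PVs are drawn by the data producer from a posterior that conditions on background variables, which this sketch glosses over.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: n respondents, M = 5 plausible values each.
# A latent ability drives an outcome y; each PV is ability plus noise.
n, M = 500, 5
theta = rng.normal(0, 1, n)                        # latent ability (unobserved)
y = 0.5 * theta + rng.normal(0, 1, n)              # outcome of interest
pvs = theta[:, None] + rng.normal(0, 0.5, (n, M))  # plausible values

# Step 1: run the secondary analysis once per plausible value.
slopes, variances = [], []
for m in range(M):
    x = pvs[:, m]
    sxx = np.sum((x - x.mean()) ** 2)
    b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)     # OLS slope
    resid = y - (y.mean() + b * (x - x.mean()))
    se2 = np.sum(resid ** 2) / (n - 2) / sxx       # slope sampling variance
    slopes.append(b)
    variances.append(se2)

# Step 2: pool with Rubin's combining rules.
q_bar = np.mean(slopes)                 # pooled point estimate
w = np.mean(variances)                  # within-imputation variance
b_var = np.var(slopes, ddof=1)          # between-imputation variance
total_var = w + (1 + 1 / M) * b_var     # total sampling variance
print(f"pooled slope={q_bar:.3f}, pooled SE={total_var ** 0.5:.4f}")
```

Note that the total variance exceeds the naive within-imputation variance: the between-PV component carries the uncertainty about each respondent's true ability into the final standard error.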
To evaluate model fit in confirmatory factor analysis, researchers compare goodness-of-fit indices (GOFs) against fixed cutoff values derived from simulation studies. However, these cutoffs may not be as broadly applicable as researchers typically assume, especially when used in settings not covered by the simulation scenarios from which they were derived. We therefore evaluate (1) the sensitivity of GOFs to model misspecification and (2) their susceptibility to extraneous data and analysis characteristics (i.e., estimator, number of indicators, number of response options, distribution of response options, loading magnitude, sample size, and factor correlation). Our study comprises the most comprehensive simulation on this matter to date, which enables us to uncover several previously unknown, or at least underappreciated, issues with GOFs. All widely used GOFs are far more susceptible to extraneous influences, and in more complex ways, than generally appreciated, and their sensitivity to misspecifications in factor loadings and factor correlations varies considerably across scenarios. For instance, one strong influence on all GOFs was the magnitude of the factor loadings (either as a main effect or in two-way interactions with other characteristics). The strong susceptibility of GOFs to data and analysis characteristics shows that the practice of judging model fit against fixed cutoffs is more problematic than previously assumed. Hitherto unnoticed effects on GOFs imply that no general cutoff rules can be applied to evaluate model fit. We discuss alternatives for assessing model fit and develop a new approach to tailor cutoffs for GOFs to the research setting at hand.
The Optimism–Pessimism Short Scale–2 (SOP2) described in this article measures the psychological disposition of optimism with two items. SOP2 is the English-language adaptation of a scale originally developed in German. Because an empirical validation of the English-language SOP2 was hitherto lacking, the aim of the present study was to assess the psychometric properties (objectivity, reliability, validity) of the English-language adaptation and to investigate measurement invariance across the two language versions using heterogeneous quota samples from the UK and Germany. Our results show that the English-language adaptation has satisfactory reliability coefficients and correlates plausibly with 10 external variables in the study (e.g., self-esteem, Emotional Stability, life satisfaction). Moreover, scalar measurement invariance of the scale holds when comparing the UK and Germany, implying the comparability of latent (co)variances and latent means across the two nations. As an ultra-short scale with a completion time of < 20 s, SOP2 lends itself particularly to the assessment of dispositional optimism in survey contexts in which assessment time or questionnaire space is limited. It can be applied in a variety of research disciplines, such as psychology, sociology, or economics.
Researchers commonly evaluate the fit of latent-variable models by comparing canonical fit indices (χ2, CFI, RMSEA, SRMR) against fixed cutoffs derived from simulation studies. However, the performance of fit indices varies greatly across empirical settings, and fit indices are susceptible to extraneous influences other than model misspecification. This threatens the validity of model judgments based on fixed cutoffs. As a solution, methodologists have proposed four principal approaches to tailor cutoffs and the set of fit indices to the specific empirical setting at hand, which we review here. Extending this line of research, we then introduce a refined approach that allows researchers to (1) generate tailored cutoffs while also (2) identifying well-performing fit indices in the given scenario. Our simulation-cum-ROC approach combines a Monte Carlo simulation with receiver operating characteristic (ROC) analysis. The Monte Carlo simulation generates distributions of fit indices under different assumptions about the population model that may have generated the data. ROC analysis evaluates the performance of fit indices in terms of their ability to discriminate between correctly specified and misspecified analysis models and allows selecting well-performing ones. It further identifies cutoffs for these fit indices that minimize Type I and Type II errors. The simulation-cum-ROC approach provides an alternative to fixed cutoffs, allows for more valid decisions about accepting or rejecting a model, and improves on prior approaches to tailored cutoffs. We provide a Shiny app that makes our approach easy to apply.
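As a toy illustration of the simulation-cum-ROC idea, the sketch below replaces the full Monte Carlo SEM simulation with two assumed, purely illustrative distributions of an RMSEA-like index under a correctly specified and a misspecified model, then uses ROC analysis (hit rate vs. false-alarm rate, Youden's J) to pick a tailored cutoff and gauges discrimination via the AUC.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the Monte Carlo step: R replications of an RMSEA-like
# index (lower = better fit) under a correctly specified and a misspecified
# model. The normal distributions are illustrative assumptions only.
R = 2000
fit_correct = rng.normal(0.03, 0.01, R)
fit_misspec = rng.normal(0.08, 0.02, R)

# ROC step: sweep candidate cutoffs; flag "misspecified" when index > cutoff.
cutoffs = np.unique(np.concatenate([fit_correct, fit_misspec]))
tpr = np.array([(fit_misspec > c).mean() for c in cutoffs])  # hit rate
fpr = np.array([(fit_correct > c).mean() for c in cutoffs])  # false alarms

# Youden's J selects the cutoff that best separates the two distributions,
# jointly minimizing Type I and Type II error rates.
best = cutoffs[np.argmax(tpr - fpr)]

# AUC via the rank (Mann-Whitney) formulation: the probability that a random
# misspecified replication shows worse fit than a random correct one.
auc = (fit_misspec[:, None] > fit_correct[None, :]).mean()
print(f"tailored cutoff={best:.3f}, AUC={auc:.3f}")
```

In a real application, the two empirical distributions would come from simulating data under the fitted model and under plausible misspecified alternatives, with the sample size, estimator, and indicator characteristics of the study at hand; the ROC machinery itself is unchanged.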
The Internal–External Locus of Control Short Scale–4 (IE-4) measures two dimensions of the personality trait locus of control with two items each. IE-4 was originally developed and validated in German and later translated into English. In the present study, we assessed the psychometric properties (i.e., objectivity, reliability, validity) of the English-language IE-4, compared these psychometric properties with those of the German-language source version, and tested measurement invariance across both language versions. Using heterogeneous quota samples from the UK and Germany, we find that the English-language adaptation has satisfactory reliability and plausible correlations with 11 external variables (e.g., general self-efficacy, self-esteem, impulsive behavior, Emotional Stability), which are comparable to those of the German-language source version. Moreover, metric measurement invariance of the scale holds when comparing the UK and Germany, implying the comparability of correlations based on the latent factors across the two nations. As an ultra-short scale (completion time < 30 s), IE-4 lends itself particularly to the assessment of locus of control in survey contexts in which assessment time or questionnaire space is limited. It can be applied in a variety of research disciplines, such as psychology, sociology, or economics.
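For two-item (sub)scales such as SOP2 and the IE-4 dimensions, reliability is commonly estimated with the Spearman–Brown formula applied to the inter-item correlation. The abstracts do not state which coefficient was used, so the following is a generic sketch on simulated, illustrative data (loadings and sample size are assumptions).

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative data: two standardized items loading about .74 on a single
# factor, yielding a true inter-item correlation of roughly .55.
n = 1000
latent = rng.normal(0, 1, n)
item1 = 0.74 * latent + rng.normal(0, 0.67, n)
item2 = 0.74 * latent + rng.normal(0, 0.67, n)

# Observed inter-item correlation.
r12 = np.corrcoef(item1, item2)[0, 1]

# Spearman-Brown reliability of the two-item sum score.
sb = 2 * r12 / (1 + r12)
print(f"r12={r12:.2f}, Spearman-Brown reliability={sb:.2f}")
```

The Spearman–Brown coefficient is generally preferred over Cronbach's alpha for two-item scales because it directly estimates the reliability of the sum score from the single available inter-item correlation.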