Confirmatory factor analytic (CFA) models are frequently used in many areas of organizational research. Due to their popularity, CFA models and the issues associated with evaluating their fit have received a vast amount of attention during the past several decades. The purpose of this study was to examine several measures of fit and the appropriateness of previously developed "rules of thumb" for their interpretation. First, an empirical example is used to illustrate the effects of nonnormality on maximum likelihood (ML) estimation and to demonstrate the importance of diagonally weighted least squares (DWLS) estimation for organizational research. Then, the results of a simulation study are presented to show that appropriate cutoff values for DWLS estimation vary considerably across conditions. Finally, regression equations are described to aid researchers in selecting cutoff values for assessing the fit of DWLS solutions, given a desired level of Type I error. The results summarized here have important implications for the interpretation and use of CFA models.
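For readers less familiar with the estimator discussed above, the diagonally weighted least squares fit function can be stated in generic notation (ours, not necessarily the article's) as

\[
F_{\mathrm{DWLS}}(\theta) = \bigl(s - \sigma(\theta)\bigr)^{\top} \operatorname{diag}(\hat{W})^{-1} \bigl(s - \sigma(\theta)\bigr),
\]

where \(s\) collects the sample (typically polychoric) covariances or correlations, \(\sigma(\theta)\) contains the model-implied values, and \(\hat{W}\) estimates the asymptotic covariance matrix of \(s\). Unlike full weighted least squares, only the diagonal of \(\hat{W}\) is inverted, which is what makes the estimator practical for the ordinal, nonnormal data common in organizational research.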
Because of the practical, theoretical, and legal implications of differential item functioning (DIF) for organizational assessments, studies of measurement equivalence are a necessary first step before scores can be compared across individuals from different groups. However, commonly recommended criteria for evaluating results from these analyses have several important limitations. The present study proposes an effect size index for confirmatory factor analytic (CFA) studies of measurement equivalence to address one of these limitations. The application of this index is illustrated with personality data from American English, Greek, and Chinese samples. Results showed a range of nonequivalence across these samples, and these differences were linked to the observed effects of DIF on the outcomes of the assessment (i.e., group-level mean differences and adverse impact).
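As context for the constraints such equivalence studies examine, a multiple-group CFA specifies, for each group \(g\),

\[
x_{g} = \nu_{g} + \Lambda_{g}\,\eta_{g} + \varepsilon_{g},
\]

and measurement equivalence sufficient for comparing observed scores requires (at least) equal loadings \(\Lambda_{1} = \Lambda_{2} = \dots\) and equal intercepts \(\nu_{1} = \nu_{2} = \dots\) across groups. The notation here is generic rather than taken from the article.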
Recently, an effect size measure, known as d_MACS, was developed for confirmatory factor analytic (CFA) studies of measurement equivalence. Although this index has several advantages over traditional methods of identifying nonequivalence, the scale and interpretation of this effect size are still unclear. As a result, the interpretation of the effect size is left to the subjective judgment of the researcher. To remedy this issue for other effect sizes, some researchers have proposed guidelines for evaluating the magnitude of an effect based on the distribution of effect sizes in the literature. The goal of the current research was to develop similar guidelines for effect sizes of measurement nonequivalence and to build on this work by also examining the practical importance of nonequivalence. Based on a review of past research, we conducted two simulation studies to generate distributions of effect sizes. Assuming the ideal scenario of invariant referent items, the results of these simulations were then used to develop empirical guidelines for interpreting nonequivalence and its effects on observed outcomes.
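To make the index concrete, the sketch below computes one common form of d_MACS for a single item by numerical integration: the squared gap between the focal- and reference-group expected item responses is averaged over the focal group's latent-trait distribution and then scaled by the pooled item standard deviation. The function and parameter names are illustrative only and are not taken from the articles summarized here.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def d_macs(nu_f, lam_f, nu_r, lam_r, sd_pooled, eta_mean_f=0.0, eta_sd_f=1.0):
    """Approximate d_MACS for one item of a single-factor CFA model.

    nu_*      : item intercepts (focal, reference)
    lam_*     : item loadings (focal, reference)
    sd_pooled : pooled within-group SD of the observed item
    eta_mean_f, eta_sd_f : focal group's latent-trait distribution
    """
    def weighted_sq_diff(eta):
        expected_f = nu_f + lam_f * eta   # focal-group expected response
        expected_r = nu_r + lam_r * eta   # reference-group expected response
        return (expected_f - expected_r) ** 2 * norm.pdf(eta, eta_mean_f, eta_sd_f)

    # Integrate over +/- 6 SD of the focal group's latent distribution
    integral, _ = quad(weighted_sq_diff,
                       eta_mean_f - 6 * eta_sd_f,
                       eta_mean_f + 6 * eta_sd_f)
    return np.sqrt(integral) / sd_pooled

# Example: a 0.2-point intercept difference with identical loadings
print(d_macs(nu_f=2.9, lam_f=0.8, nu_r=3.1, lam_r=0.8, sd_pooled=1.0))  # ~0.20
```

With these inputs the index reduces to the intercept gap in pooled-SD units, which is one way to see why it is often read like a standardized mean difference.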
The Domain-Specific Risk-Taking scale was designed to assess risk taking in specific domains. This approach is unconventional in personality assessment but reflects conventional wisdom in the decision-making community that cross-situational consistency in risk taking is more myth than reality. We applied bifactor analysis to a large sample (n = 921) of responses to the Domain-Specific Risk-Taking scale. Results showed that, in addition to domain-specific facets, there does appear to be evidence for a general risk-taking disposition. Moreover, this general appetite for risk appears to be useful for predicting real-world outcomes.
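As a brief reminder of the structure a bifactor analysis posits (again in generic notation, not the article's), each item response is modeled as

\[
x_{i} = \lambda_{Gi}\,\eta_{G} + \lambda_{Si}\,\eta_{S(i)} + \varepsilon_{i},
\]

where \(\eta_{G}\) is a general factor loading on all items, \(\eta_{S(i)}\) is the specific factor for item \(i\)'s domain, and the general and specific factors are specified as orthogonal. Substantial loadings on \(\eta_{G}\) across domains are what support a broad risk-taking disposition over and above the domain-specific facets.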