Abstract: The factor structure of mental abilities has most often been depicted using a higher-order model. Under this model, general mental ability (g) is placed at the top of a pyramid, with "loading" arrows going from it to the other factors of intelligence, which in turn go to subtest scores. In contrast, under the bifactor model (also known as the nested factors/direct hierarchical model), each subtest score has its own direct loading on g; the non-g factors (e.g., the broad abilities) do not mediate the relationships of the subtest scores with g. Here we summarized past research that compared the fit of higher-order and bifactor models using confirmatory factor analysis (CFA). We also analyzed additional archival datasets to compare the fit of the two models. Using a total database consisting of 31 test batteries, 58 datasets, and 1,712,509 test takers, we found stronger support for a bifactor model of g than for the traditional higher-order model. Across 166 comparisons, the bifactor model had median increases of 0.076 for the Comparative Fit Index (CFI), 0.083 for the Tucker-Lewis Index (TLI), and 0.078 for the Normed Fit Index (NFI) and decreases of 0.028 for the root mean square error of approximation (RMSEA) and 1,343 for the Akaike Information Criterion (AIC). Consequently, researchers should consider using bifactor models when conducting CFAs. The bifactor model also makes the unique contributions of g and the broad abilities to subtest scores more salient to test users.
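To make the structural contrast concrete, the two models can be written as follows for a subtest score x_i belonging to broad factor F_k; the notation is a generic illustration using standard factor-model algebra, not equations reproduced from the article.

```latex
% Higher-order model: g influences a subtest only through its broad factor.
x_i = \lambda_{ik} F_k + \varepsilon_i, \qquad F_k = \gamma_k g + u_k
\;\;\Rightarrow\;\; x_i = (\lambda_{ik}\gamma_k)\, g + \lambda_{ik} u_k + \varepsilon_i .

% Bifactor (nested-factors) model: each subtest loads directly on g and on
% one orthogonal broad (group) factor, so neither mediates the other.
x_i = \lambda_{ig}\, g + \lambda_{ik} F_k + \varepsilon_i,
\qquad \operatorname{Cov}(g, F_k) = 0 .
```

The higher-order model thus constrains each subtest's g loading to equal the product λ_{ik}γ_k (a proportionality constraint within each broad factor), whereas the bifactor model estimates λ_{ig} freely, which is why the bifactor model generally fits at least as well as the higher-order model it nests.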
This article evaluated the validity of two traditional traitedness measures (i.e., standard deviation traitedness measures and global, single-item traitedness measures) and three new traitedness measures (i.e., multi-item, ranking, and facet traitedness measures). Overall, there was poor construct validity for both the traditional and new measures of traitedness. Furthermore, traitedness was a spurious moderator of the relationship between openness to experience and academic performance. Finally, evidence of nonlinear personality-academic performance relationships was discovered.
One of the typical roles of industrial–organizational (I-O) psychologists working as practitioners is administering employee surveys measuring job satisfaction/engagement. Traditionally, this work has involved developing (or choosing) the items for the survey, administering the items to employees, analyzing the data, and providing stakeholders with summary results (e.g., percentages of positive responses, item means). In recent years, I-O psychologists have moved into uncharted territory via the use of survey key driver analysis (SKDA), which aims to identify the most critical items in a survey for action-planning purposes. Typically, this analysis involves correlating (or regressing) a self-report criterion item (e.g., “considering everything, how satisfied are you with your job”) with (or on) each of the remaining survey items in an attempt to identify which items are “driving” job satisfaction/engagement. It is also possible to use an index score (i.e., a scale score formed from several items) as the criterion instead of a single item. Because the criterion measure (whether a single item or an index) is internal to the same survey from which the predictors are drawn, this practice is distinct from linkage research. This methodology is not widely covered in survey methodology coursework, and there are few peer-reviewed articles on it. Yet a number of practitioners are marketing this service to their clients. In this focal article, a group of practitioners with extensive applied survey research experience uncovers several methodological issues with SKDA, using data from a large multiorganizational survey to back up claims about these issues. One issue is that SKDA ignores the psychometric reality that item standard deviations affect which items are chosen as drivers. Another issue is that the analysis ignores the factor structure of survey item responses. Furthermore, conducting the analysis anew each time a survey is administered is at odds with the lack of situational and temporal specificity in driver results. Additionally, it is problematic to imply causal relationships from the correlational data seen in most surveys. Most surprisingly, randomly choosing items out of a hat yields validities similar to those obtained from conducting the analysis. Thus, we recommend that survey providers stop conducting SKDA until they can produce science that backs up this practice. These issues, together with the scant literature examining the practice, make a rigorous evaluation of SKDA a timely inquiry.
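For readers unfamiliar with the mechanics of SKDA as described above, the following is a minimal sketch of the basic computation and of the "items out of a hat" comparison; the data frame, column names, and item counts are hypothetical and are not drawn from the multiorganizational survey analyzed in the article.

```python
import numpy as np
import pandas as pd

# Hypothetical survey responses: one row per employee, one column per item.
# "overall_sat" stands in for the self-report criterion item; the rest are
# candidate driver items on a 1-5 response scale.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(500, 11)),
                  columns=["overall_sat"] + [f"item_{i}" for i in range(1, 11)])

# Basic SKDA: correlate the criterion item with every other survey item and
# rank items by the size of the correlation to nominate "drivers".
driver_corrs = (df.drop(columns="overall_sat")
                  .corrwith(df["overall_sat"])
                  .sort_values(ascending=False))
print(driver_corrs.head())

# The comparison criticized in the article: draw items at random and inspect
# how similar their criterion correlations look to those of the top "drivers".
random_items = rng.choice(driver_corrs.index, size=3, replace=False)
print(driver_corrs.loc[random_items])
```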
Recently emerging evidence suggests that the dominant structural model of mental abilities, the Cattell-Horn-Carroll (CHC) model, may not adequately account for observed scores on mental abilities batteries, leading scholars to call the model's validity into question. Establishing the robustness of these findings is important because CHC is the foundation for several contemporary mental abilities test batteries, such as the Woodcock-Johnson III (WJ-III). Using confirmatory factor analysis, we investigated CHC's robustness across 4 archival samples of mental abilities test battery data, including the WJ-III, the Kaufman Adolescent & Adult Intelligence Test (KAIT), the Kaufman Assessment Battery for Children (KABC), and the Differential Ability Scales (DAS). We computed omega hierarchical (ωH) and omega subscale (ωS) coefficients for g and the broad factors, which estimated the relationship of composite scores to g and the broad factors, respectively. Across all 4 samples, we found strong evidence for a general ability, g. We additionally found evidence for 3 to 9 residualized, orthogonal broad abilities existing independently of g, many of which also explained reliable variance in test battery scores that could not be accounted for by g alone. The reliabilities of these broad factors, however, were less than desirable (i.e., < .80), and achieving desirable reliabilities would be practically infeasible (e.g., requiring excessively large numbers of subtests). Our results, and those of CHC critics, are wholly consistent with Carroll's model. Essentially, both g and orthogonal broad abilities are required to explain variance in mental abilities test battery scores, which is consistent with Carroll but not with Cattell-Horn.
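As background for the coefficients mentioned above, one common bifactor-based formulation of omega hierarchical (standard in the reliability literature and shown here for reference, not reproduced from the article) expresses the proportion of composite-score variance attributable to g, given general loadings λ_{ig}, broad-factor loadings λ_{ik}, and unique variances θ_i for standardized subtests:

```latex
\omega_H =
\frac{\left(\sum_i \lambda_{ig}\right)^{2}}
     {\left(\sum_i \lambda_{ig}\right)^{2}
      + \sum_k \left(\sum_{i \in k} \lambda_{ik}\right)^{2}
      + \sum_i \theta_i}
```

In one common subscale analogue, the numerator is replaced by the squared sum of a single broad factor's loadings and the denominator sums are restricted to that factor's subtests, which is why broad factors measured by only a few subtests tend to show the low reliabilities reported above.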