Executive function is an important concept in neuropsychological and cognitive research, and is often viewed as central to effective clinical assessment of cognition. However, the construct validity of executive function tests is controversial. The switching, inhibition, and updating model is the most empirically supported and replicated factor model of executive function (Miyake et al., 2000). To evaluate the relation between executive function constructs and nonexplicitly executive cognitive constructs, we reanalyzed data using confirmatory factor analysis guided by the comprehensive Cattell–Horn–Carroll (CHC) model of cognitive abilities. Data from 7 of the best studies supporting the executive function model were reanalyzed, contrasting executive function models with CHC models. Where possible, we examined the effect of specifying executive function factors in addition to the CHC factors. The results suggested that little evidence is available to support updating as a factor separate from general memory factors; that inhibition does not separate from general speed; and that switching is supported as a narrow factor under general speed, but with a more restricted definition than some clinicians and researchers have conceptualized. The replicated executive function factor structure was integrated with the larger body of research on individual differences in cognition, as represented by the CHC model.
The Cattell–Horn–Carroll (CHC) model is a comprehensive model of the major dimensions of individual differences that underlie performance on cognitive tests. Studies evaluating the generality of the CHC model across test batteries, age, gender, and culture were reviewed and found to be overwhelmingly supportive. However, less research is available to evaluate the CHC model for clinical assessment. The CHC model was shown to provide good to excellent fit in nine high-quality data sets involving popular neuropsychological tests, across a range of clinically relevant populations. Executive function tests were found to be well represented by the CHC constructs, and a discrete executive function factor was found not to be necessary. The CHC model could not be simplified without significant loss of fit. The CHC model was supported as a paradigm for cognitive assessment, across both healthy and clinical populations and across both nonclinical and neuropsychological tests. The results have important implications for theoretical modeling of cognitive abilities, providing further evidence for the value of the CHC model as a basis for a common taxonomy across test batteries and across areas of assessment.
Fluency is an important construct in clinical assessment and in cognitive taxonomies. In the Cattell–Horn–Carroll (CHC) model, Fluency is represented by several narrow factors that form a subset of the long-term memory encoding and retrieval (Glr) broad factor. The CHC broad classification of Fluency was evaluated in five data sets, and the CHC narrow classification was evaluated in an additional two data sets. The results suggest that Fluency tests are more strongly related to processing speed (Gs) and acquired knowledge (Gc) than to Glr, but Fluency may also be represented as a distinct broad factor. In the two additional data sets with a large number of Fluency tests, the CHC Fluency narrow factors failed to replicate with confirmatory factor analysis. An alternative and simpler narrow structure of Fluency was found, supporting the factorial distinction of semantic versus orthographic Fluency. The results have important implications for the factorial structure of memory, the classification of Fluency tests, and the assessment of Fluency.
Mixed group validation (MGV) is a statistical model for estimating the diagnostic accuracy of tests. Unlike the more common approach to estimating criterion-related validity, known group validation (KGV), MGV does not require a perfect external validity criterion. The present article describes MGV by (a) specifying both the standard error associated with MGV validity estimates and the effect of assumption violation, (b) recommending required sample sizes under various study conditions, (c) evaluating whether assumption violation can be identified, and (d) providing a simulated example of MGV with imperfect base-rate estimates. It is concluded that MGV will always have a wider margin of error than KGV; that MGV performs best when the research design approximates a KGV design; that the effect of assumption violation depends on both the severity of the violation and the value of the base rates; and that assumption violation may be detectable only in severe cases.
Mixed Group Validation (MGV) is an approach for estimating the diagnostic accuracy of tests. MGV is a promising alternative to the more commonly used Known Groups Validation (KGV) approach for estimating diagnostic accuracy. The advantage of MGV lies in the fact that the approach does not require a perfect external validity criterion or gold standard. However, the research designs where MGV is most appropriate have not been thoroughly explored. We give a brief description of the ideal research design to minimize error for MGV studies, test whether the MGV assumptions hold with clinical data, evaluate whether there is evidence of assumption violation among published MGV studies, give a practical description of the MGV assumptions, and describe an example of an optimal use of MGV. Ultimately, we conclude that MGV is not generally superior to KGV but may be used in some cases where the assumptions and standard error have been considered appropriately.
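The core logic of MGV described in the two abstracts above can be sketched as a pair of linear equations: each mixed group's observed test-positive rate is a base-rate-weighted blend of the test's sensitivity and false-positive rate, and two groups with different base rates suffice to solve for both parameters. The following is a minimal illustration, not the authors' implementation; the numeric rates are hypothetical, and it assumes the base rates are known exactly and the observed positive rates are error-free, whereas the abstracts stress that sampling error and base-rate misestimation widen MGV's margin of error in practice.

```python
def mgv_estimates(h1: float, p1: float, h2: float, p2: float) -> tuple[float, float]:
    """Estimate sensitivity (Se) and specificity (Sp) from two mixed groups.

    Each mixed group i has a known base rate p_i of the condition and an
    observed test-positive rate h_i, linked by
        h_i = p_i * Se + (1 - p_i) * (1 - Sp).
    Two groups with different base rates give two linear equations in the
    two unknowns Se and Sp.
    """
    if p1 == p2:
        raise ValueError("base rates must differ or the system is unsolvable")
    # Subtracting the two equations isolates Youden's index J = Se + Sp - 1:
    #   h1 - h2 = (p1 - p2) * (Se + Sp - 1)
    j = (h1 - h2) / (p1 - p2)
    se = h1 + (1 - p1) * j   # back-substitute into group 1's equation
    sp = 1 + p1 * j - h1     # then Sp = (1 + J) - Se
    return se, sp


# Hypothetical example: a test with true Se = .80 and Sp = .90, observed in
# two mixed groups whose (assumed known) base rates are .70 and .20, yields
# positive rates of .59 and .24; MGV recovers the true accuracy parameters.
se, sp = mgv_estimates(h1=0.59, p1=0.70, h2=0.24, p2=0.20)
```

Note that the divisor (p1 - p2) makes the design point from the abstracts concrete: the closer the two base rates are, the more any error in the observed rates is amplified, which is why MGV performs best when the design approximates KGV (base rates near 1 and 0).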