In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests, the quantity and range of these data surpass those of traditional normative data, thereby enabling more accurate neuropsychological assessment. The unique structure of the database facilitates normative comparison methods that were not feasible before, in particular methods in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database: matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. We also give a brief description of the current contents of the ANDI database.
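The data-handling steps named above (outlier removal, adjusting for a demographic variable, transformation to normality) could be sketched roughly as follows. This is an illustrative pipeline only, not ANDI's actual procedure; the function name, the 3-SD cut-off, and the choice of a rank-based (van der Waerden) normalization are all assumptions.

```python
import numpy as np
from statistics import NormalDist

def clean_and_norm(scores, ages, sd_cut=3.0):
    """Illustrative sketch: trim outliers, regress out age,
    then apply a rank-based inverse-normal transform."""
    scores = np.asarray(scores, float)
    ages = np.asarray(ages, float)
    # 1. Remove outlying values (beyond sd_cut standard deviations)
    z = (scores - scores.mean()) / scores.std()
    keep = np.abs(z) < sd_cut
    scores, ages = scores[keep], ages[keep]
    # 2. Remove the linear influence of a demographic variable (here: age)
    slope, intercept = np.polyfit(ages, scores, 1)
    resid = scores - (slope * ages + intercept)
    # 3. Rank-based transformation to normality (van der Waerden scores)
    ranks = resid.argsort().argsort() + 1
    p = (ranks - 0.5) / len(resid)
    return np.array([NormalDist().inv_cdf(pi) for pi in p])
```

The resulting values are approximately standard normal regardless of the raw score distribution, which is what makes subsequent normative comparisons across heterogeneous tests tractable.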
Many neuropsychologists hold that the multitude of cognitive tests may be grouped into a much smaller number of cognitive domains. However, there is little consensus on how many domains exist, what these domains are, or which cognitive tests belong to which domain. This uncertainty can be resolved by factor analysis, provided that the analysis includes a broad range of cognitive tests administered to a very large number of people. In this article, two such factor analyses were performed, each combining multiple studies. However, because it was not possible to obtain complete multivariate data on more than the most common test variables in the field, not all possible domains were examined here. The first analysis was a factor meta-analysis of correlation matrices combining data of 60,398 healthy participants from 52 studies. Several models from the literature were fitted, of which a version based on the Cattell-Horn-Carroll (CHC) model was found to describe the correlations better than the others. The second analysis was a factor analysis of the Advanced Neuropsychological Diagnostics Infrastructure (ANDI) database, combining scores of 11,881 participants from 54 Dutch and Belgian studies not included in the first meta-analysis. Again, the model fit was better for the CHC model than for the other models. We therefore conclude that the CHC model best characterizes both the cognitive domains and which tests belong to each domain. Although originally developed in the intelligence literature, the CHC model thus deserves more attention in neuropsychology. Keywords: factor analysis; meta-analysis of individual participant data; meta-analytic SEM; clinical neuropsychology; cognitive functioning; neuropsychological tests; Cattell-Horn-Carroll model
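The first stage of a factor meta-analysis of correlation matrices can be sketched as below. This is a deliberate simplification, assuming a fixed-effects model in which each study's correlation matrix is weighted by its sample size; real meta-analytic SEM (e.g., two-stage SEM) is considerably more involved.

```python
import numpy as np

def pool_correlations(corr_mats, ns):
    """Pool study-level correlation matrices by sample-size weighting
    (a fixed-effects simplification of stage 1 of meta-analytic SEM).
    corr_mats: list of k-by-k correlation matrices, one per study.
    ns: list of study sample sizes."""
    w = np.asarray(ns, float)
    mats = np.asarray(corr_mats, float)
    # Weighted average of the matrices; factor models (e.g., a CHC-style
    # structure) would then be fitted to this pooled matrix in stage 2.
    return np.tensordot(w, mats, axes=1) / w.sum()
```

For example, pooling r = .40 from a study with n = 100 and r = .60 from a study with n = 300 yields a pooled correlation of .55, which larger studies dominate by design.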
(1) Background: There is a need for a brief assessment of cognitive function, both in patient care and scientific research, for which the Montreal Cognitive Assessment (MoCA) is a psychometrically reliable and valid tool. However, fine-grained normative data allowing for adjustment for age, education, and/or sex are lacking, especially for its Memory Index Score (MIS). (2) Methods: A total of 820 healthy individuals aged 18–91 (366 men) completed the Dutch MoCA (version 7.1), of whom 182 also completed the cued recall and recognition memory subtests, enabling calculation of the MIS. Regression-based normative data were computed for the MoCA Total Score and MIS, following the data-handling procedure of the Advanced Neuropsychological Diagnostics Infrastructure (ANDI). (3) Results: Age, education level, and sex were significant predictors of the MoCA Total Score (conditional R² = 0.4, marginal R² = 0.12, restricted maximum likelihood (REML) criterion at convergence: 3470.1) and the MIS (marginal R² = 0.14, REML criterion at convergence: 682.8). Percentile distributions are presented that allow for age, education, and sex adjustment of the MoCA Total Score and the MIS. (4) Conclusions: We present normative data covering the full adult life span that can be used to screen for overall cognitive deficits and memory impairment, not only in older people with, or at risk of, neurodegenerative disease, but also in younger individuals with acquired brain injury, neurological disease, or non-neurological medical conditions.
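The core idea of regression-based norming can be sketched as follows: predict the expected score from demographic variables, then convert the patient's standardized residual into a percentile. This is a minimal ordinary-least-squares illustration, not the mixed-model (REML) procedure the study actually used; all names are hypothetical.

```python
import numpy as np
from statistics import NormalDist

def percentile_for(patient_score, patient_x, X, y):
    """Regression-based norming sketch.
    X: normative predictors (rows = participants, cols = e.g. age, sex).
    y: normative test scores. patient_x: the patient's predictor values."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    y = np.asarray(y, float)
    # Fit the normative regression and estimate residual spread
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sd = resid.std(ddof=X.shape[1])
    # Standardize the patient's deviation from the demographic expectation
    expected = float(np.concatenate(([1.0], np.asarray(patient_x, float))) @ beta)
    z = (patient_score - expected) / sd
    return 100 * NormalDist().cdf(z)
```

A score equal to the demographic expectation lands at the 50th percentile; scores well below it land in the low percentiles that flag possible impairment.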
Neuropsychologists administer neuropsychological tests to decide whether a patient is cognitively impaired. This clinical decision is made by comparing a patient's scores to those of healthy participants in a normative sample. In a multivariate normative comparison, a patient's entire profile of scores is compared to scores in a normative sample. Such a multivariate comparison has been shown to improve clinical decision making. However, it requires a multivariate normative data set, which often is unavailable. To obtain such a multivariate normative data set, the authors propose to aggregate healthy control group data from existing neuropsychological studies. As not all studies administered the same tests, this aggregated database will contain substantial amounts of missing data. The authors therefore propose two solutions: multiple imputation and factor modeling. Simulation studies show that factor modeling is preferred over multiple imputation, provided that the factor model is adequately specified. This factor modeling approach will therefore allow routine use of multivariate normative comparisons, enabling more accurate clinical decision making.
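A multivariate normative comparison typically rests on the (squared) Mahalanobis distance between the patient's score profile and the normative mean, taking the covariance among tests into account. The sketch below shows that distance computation only; the decision step (comparing against an F- or chi-square-based critical value) and the handling of missing data are omitted here.

```python
import numpy as np

def mahalanobis_sq(patient, norm_data):
    """Squared Mahalanobis distance of a patient's score profile from
    the normative sample mean, using the normative covariance matrix.
    norm_data: rows = healthy participants, cols = tests."""
    X = np.asarray(norm_data, float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    d = np.asarray(patient, float) - mu
    # Solve cov @ x = d instead of inverting cov explicitly (more stable)
    return float(d @ np.linalg.solve(cov, d))
```

Because the covariance matrix weights correlated tests jointly, a profile of scores that are each mildly low can still yield a large distance, which is exactly the sensitivity gain of the multivariate approach over test-by-test comparison.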
In clinical neuropsychology, it is often necessary to estimate a patient's premorbid level of cognitive functioning in order to evaluate whether their scores on cognitive tests should be considered abnormal. In practice, test results from before the onset of brain pathology are rarely available, and the patient's level of education is used instead as an estimate of the premorbid level. Unfortunately, level of education may be expressed on many different educational scales, which are difficult to use interchangeably. Here, we introduce a new scale that can replace existing scales and can be used interchangeably with any of them: the Universal Scale of Intelligence Estimates (USIE). To achieve this, we propose to map all levels of existing educational scales to standard IQ scores. The USIE point estimate is supplemented with an estimation interval. We assert that the USIE offers important benefits for clinical practice and research.
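The mechanics of such a mapping could look like the sketch below. The education levels, IQ values, and interval width here are purely hypothetical placeholders to show the structure of the idea; the published USIE mapping may differ entirely.

```python
from statistics import NormalDist

# Hypothetical mapping of a 3-level education scale to IQ point estimates;
# not the actual USIE values.
USIE_MAP = {"low": 90, "medium": 100, "high": 110}

def usie_estimate(level, interval_sd=7.5, coverage=0.95):
    """Return a (point estimate, estimation interval) pair for an
    education level. interval_sd is an assumed estimation-error SD."""
    point = USIE_MAP[level]
    half = NormalDist().inv_cdf(0.5 + coverage / 2) * interval_sd
    return point, (point - half, point + half)
```

Expressing every educational scale on a single IQ metric with an interval makes estimates from different scales directly comparable, which is the stated goal of the USIE.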