Detecting change in individual patients is an important goal of neuropsychological testing. However, limited information is available about test-retest changes, and well-validated prediction methods are lacking. Using a large nonclinical subject group (N = 384), we recently investigated test-retest reliabilities and practice effects on the Wechsler Adult Intelligence Scale and Halstead-Reitan Battery. Data from this group also were used to develop models for predicting follow-up test scores and establish confidence intervals around them. In this article we review those findings, examine their generalizability to new nonclinical and clinical groups, and explore the sensitivity of the prediction models to real change. Despite similarities across samples in reliability coefficients and practice effects, limits to the generalizability of prediction methods were found. Also, when multiple test measures were considered together, one or more "significant" changes were common in all (including stable) subject groups. By employing normative cut-offs that correct for this, sensitivity of the models to neurological recovery and deterioration was modest to good. More complex regression models were not more accurate than the simpler Reliable Change Index with correction for practice effects when confidence intervals for all methods were adjusted for variations in level of baseline test performance.
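The practice-corrected Reliable Change Index referenced above can be sketched in a few lines. This is the standard Jacobson–Truax difference-score formulation with the mean practice gain subtracted (in the spirit of the correction the abstract describes); the reliability, standard deviation, and practice-effect values in the usage example are illustrative assumptions, not figures from the study.

```python
import math

def reliable_change_index(baseline, retest, sd_baseline, r_xx, practice_effect=0.0):
    """Practice-corrected Reliable Change Index.

    baseline, retest  -- the two test scores for one examinee
    sd_baseline       -- standard deviation of the test at baseline (normative)
    r_xx              -- test-retest reliability coefficient
    practice_effect   -- mean retest gain in a stable reference group
    """
    sem = sd_baseline * math.sqrt(1 - r_xx)   # standard error of measurement
    s_diff = math.sqrt(2) * sem               # standard error of the difference score
    return (retest - baseline - practice_effect) / s_diff

# Illustrative values only: baseline 100, retest 110, SD 15,
# reliability .90, and a 5-point average practice gain.
rci = reliable_change_index(100, 110, sd_baseline=15, r_xx=0.90, practice_effect=5)
```

The resulting RCI (about 0.75 here) is compared against a normative cutoff such as plus or minus 1.645 or 1.96; values inside the cutoff are consistent with measurement error plus expected practice gain rather than real change.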
Regression-based norms for the Trail Making Test, Boston Naming Test, and Wisconsin Card Sorting Test, which we published in 1991 and 1993, have been criticized by Fastenau (1998) as having overcorrected for demographic influences in a sample of 63 older adults. We present data from new, independent participant samples that are consistent with expectations from the regression-based norms. We propose that Fastenau's findings in this instance resulted from the nonrepresentative nature of his relatively small sample, rather than from statistical deficiencies of regression-based norms. Our currently published norms on one of the tests considered here, the Boston Naming Test, are based upon a participant sample that was small and had inadequate representation of young adults. We address this by providing updated norms based on a much larger and more representative sample (N = 531).
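Regression-based norming of the kind defended in this abstract typically predicts an expected score from demographic variables and then expresses the observed score as a standardized residual. A minimal sketch, assuming a linear model in age and education with purely hypothetical coefficients (the published norms use their own fitted equations):

```python
def demographically_adjusted_z(raw_score, age, education, coeffs, see):
    """Standardized residual relative to a demographic regression norm.

    coeffs -- (intercept, slope_for_age, slope_for_education); hypothetical here
    see    -- standard error of estimate of the fitted regression
    """
    intercept, b_age, b_edu = coeffs
    predicted = intercept + b_age * age + b_edu * education  # expected score
    return (raw_score - predicted) / see                     # z relative to peers

# Hypothetical example: a 70-year-old with 12 years of education
# scoring 43 on a test with fitted coefficients (50, -0.2, 1.0) and SEE = 5.
z = demographically_adjusted_z(43, age=70, education=12, coeffs=(50, -0.2, 1.0), see=5)
```

With these illustrative numbers the predicted score is 48, so the observed 43 corresponds to z = -1.0, i.e., one standard error below demographic expectation.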
A semantic differential scale was administered to 208 school children when they were in the second, fourth, sixth, eighth, tenth, and twelfth grades. Their perceptions of two concepts, Education (going to school) and Work (having a job), were measured. Each semantic differential scale had 15 adjective pairs and reflected the three underlying factors of Evaluative, Potency, and Activity. Because the study spanned 10 years (ages seven to 18), the children's changing cognitive developmental stages were expected to influence factor analytic and reliability results. Confirmatory factor analysis, which forced the data into three factors, did not clearly identify the expected three factors, although more items loaded on the three factors with age. An exploratory factor analysis identified a trend from six factors to four factors across grades. Reliability also improved across age groups. Caution should be exercised when using the semantic differential with young children in investigations of abstract concepts.
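Internal-consistency reliability of the kind tracked across grades here is commonly quantified with Cronbach's alpha (the abstract does not name its specific coefficient, so this is an illustrative choice). A minimal, dependency-free sketch with made-up item data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items -- list of per-item score lists; items[j][i] is respondent i's
             score on item j (all items scored by the same respondents).
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total-scale score for each respondent.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))

# Toy data: two items answered by three respondents (illustrative only).
alpha = cronbach_alpha([[1, 2, 3], [1, 3, 2]])
```

Item responses that covary strongly push alpha toward 1, while weakly related items pull it down, which is consistent with the abstract's observation that reliability improved as the children's responses became more coherent with age.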