Assessing creative potential with a comprehensive battery of standardized tests requires attention to how and why an individual responds, not only how well. The "intelligent testing" philosophy, which focuses on the person being tested rather than on the measure itself, helps psychologists form a more complete picture of an examinee, including information about his or her creative potential. Although most aspects of creativity are not captured by current individually administered IQ and achievement tests, one exception is divergent production. Though still poorly represented, some subtests show great potential for tapping divergent production and hence provide some insight into creativity. This article reviews the research on the relationship between measures of intelligence and creativity. The authors also propose a way to use individually administered cognitive and achievement batteries to extract information about an individual's divergent production and general creative potential.
Preschool-age children experiencing delays in physical, cognitive, communication, social, emotional, or adaptive development are often referred for a comprehensive assessment to make diagnostic determinations and to help develop appropriate interventions. Typically, cognitive assessment plays a key role in a comprehensive evaluation of a young child. This article reviews five individually administered tests of cognitive ability normed for preschool-age children: the Bayley Scales of Infant Development, 2nd edition; the Kaufman Assessment Battery for Children, 2nd edition; the Wechsler Preschool and Primary Scale of Intelligence, 3rd edition; the Stanford-Binet Intelligence Scale, 5th edition; and the Differential Ability Scales. For each instrument, the article provides a description of the test procedures, information on the scoring system, highlights of its technical qualities, and a summary of the general meaning of test results. The article concludes with the strengths and limitations of the instruments.
The third edition of the Wechsler Adult Intelligence Scale manual reports four-factor solutions for the WAIS-III, and subsequent research has validated four-factor solutions for a variety of samples. These four factors consistently correspond to the four Factor Indexes yielded by the WAIS-III. However, because the WAIS-III still provides Verbal and Performance IQs in addition to the Indexes, it is desirable to examine two-factor solutions as well. In addition, because the Wechsler literature includes much interpretation of three-factor solutions, those solutions were likewise examined. Principal factor analysis followed by Varimax and Oblimin rotations of two and three factors was performed on data for the total WAIS-III sample, ages 16 to 89 years (N = 2,450). The two-factor solutions were viewed as a construct validation of Wechsler's two separate IQs, although the Working Memory subtests tended to load higher on the Performance scale than on their intended (Verbal) scale; the three-factor solutions were interpreted within the context of Horn's expanded fluid-crystallized theory and research on working memory. Both the two- and three-factor Varimax-rotated solutions were related to similar factor analyses conducted previously for the Wechsler Adult Intelligence Scale-Revised and the Wechsler Intelligence Scale for Children-III. Coefficients of congruence between like-named factors consistently exceeded .90, and usually .98, across the different Wechsler batteries.
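The cross-battery comparison above rests on Tucker's coefficient of congruence, which measures the similarity of two factors' loading vectors on a 0-to-1 scale. A minimal sketch of the computation follows; the loading values are illustrative placeholders, not figures from the study:

```python
import numpy as np

def congruence_coefficient(x, y):
    """Tucker's coefficient of congruence between two factor loading vectors:
    sum(x*y) / sqrt(sum(x^2) * sum(y^2))."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

# Hypothetical loadings for a like-named factor on two different batteries
factor_a = [0.78, 0.71, 0.65, 0.60, 0.12, 0.08]
factor_b = [0.80, 0.69, 0.62, 0.58, 0.15, 0.10]

phi = congruence_coefficient(factor_a, factor_b)
```

Values above .90 are conventionally read as indicating factor similarity, and values near .98, as reported for the Wechsler batteries, as near-identity of the factors.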
The process of assessment report writing is a complex one, involving both the statistical evaluation of data and clinical methods of data interpretation to appropriately answer referral questions. Today, the data generated in a psychological assessment are often analyzed, at least in part, by computer. In this article, the author focuses on the interaction between the decision-making processes of human clinicians and computer-based test interpretations. The benefits of and problems with computers in assessment are highlighted alongside research on the validity of automated assessment and research comparing clinicians with computers in decision making. The author concludes that clinical judgment and computer-based test interpretation each have weaknesses. However, by using strategies that reduce clinicians' susceptibility to decision-making errors and by ensuring that only valid computer-based test interpretations are used, clinicians can optimize the accuracy of the conclusions they draw in their assessment reports.