The Wisconsin Card Sorting Test (WCST) is a popular neurocognitive task used to assess cognitive flexibility, and aspects of executive functioning more broadly, in research and clinical practice. Despite its widespread use and the publication of an updated WCST manual in 1993, confusion remains in the literature about how to score the WCST and, importantly, how to interpret its outcome variables as indicators of cognitive flexibility. This critical review provides an overview of changes to the WCST over time, how existing scoring methods for the task differ, key terminology and how it relates to the assessment of cognitive flexibility, and issues with the use of the WCST across the literature. In particular, this review focuses on the confusion between the terms 'perseverative responses' and 'perseverative errors' and the inconsistent scoring of these variables. To our knowledge, this critical review is the first of its kind to focus on the inherent issues surrounding the WCST when used as an assessment of cognitive flexibility. We provide recommendations to overcome these and other issues when using the WCST in future research and clinical practice.
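To make the distinction at the heart of this confusion concrete, the following minimal Python sketch illustrates one common (Heaton-style) reading of the terms: a perseverative response matches the currently perseverated-to principle, whereas a perseverative error is a perseverative response that is also incorrect. The function and simplified rule are our own illustration; actual WCST scoring criteria are considerably more involved.

```python
# Oversimplified illustration of the terminological distinction discussed
# above. Real WCST scoring rules (e.g., Heaton's) are far more involved;
# this sketch only shows that perseverative errors are a subset of
# perseverative responses, not a synonym for them.
def classify_response(matched_principle, correct_principle, persev_principle):
    perseverative = matched_principle == persev_principle
    error = matched_principle != correct_principle
    return {
        "perseverative_response": perseverative,
        "perseverative_error": perseverative and error,   # subset of responses
        "non_perseverative_error": error and not perseverative,
    }

# A card sorted by 'color' when the rule is 'shape' and the participant
# keeps perseverating to 'color' yields both a perseverative response
# and a perseverative error:
print(classify_response("color", "shape", "color"))
```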
Predictive coding provides a compelling, unified theory of neural information processing, including for language. However, there is insufficient understanding of how predictive models adapt to changing contextual and environmental demands, and of the extent to which such adaptive processes differ between individuals. Here, we used electroencephalography (EEG) to track prediction error responses during a naturalistic language processing paradigm. In Experiment 1, 45 native speakers of English listened to a series of short passages. Via a speaker manipulation, we introduced changing intra-experimental adjective order probabilities for two-adjective noun phrases embedded within the passages and investigated whether prediction error responses adapt to reflect these intra-experimental predictive contingencies. To this end, we calculated a novel measure of speaker-based, intra-experimental surprisal (“speaker-based surprisal”), defined on a trial-by-trial basis by clustering together adjectives with similar meanings. N400 amplitude at the position of the critical second adjective served as an outcome measure of prediction error. Results showed that N400 responses became attuned to speaker-based surprisal over the course of the experiment, indicating that listeners rapidly adapt their predictive models to reflect local environmental contingencies (here, the probability of one type of adjective following another when uttered by a particular speaker). Strikingly, this occurs in spite of the wealth of prior linguistic experience that participants bring to the laboratory. Model adaptation effects were strongest for participants with a steep aperiodic (1/f) slope in resting EEG and a low individual alpha frequency (IAF), with idea density (ID) showing a more complex pattern. These results were replicated in a separate sample of 40 participants in Experiment 2, which employed a design highly similar to that of Experiment 1. Overall, our results suggest that individuals with a steep aperiodic slope adapt their predictive models most strongly to context-specific probabilistic information. A steep aperiodic slope is thought to reflect low neural noise, which in turn may be associated with higher neural gain control and better cognitive control. Individuals with a steep aperiodic slope may thus be able to more effectively and dynamically reconfigure their prediction-related neural networks to meet current task demands. We conclude that predictive mechanisms in language are highly malleable and dynamic, reflecting both the affordances of the present environment and the intrinsic information processing capabilities of the individual.
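As a rough illustration of how a trial-by-trial, speaker-based surprisal measure of this kind might be computed, consider the Python sketch below. The function, smoothing scheme, and example trials are hypothetical and not taken from the study: surprisal is computed as the negative log probability of the second adjective's semantic cluster given the speaker and the first adjective's cluster, with counts updated incrementally so that early trials reflect flat priors.

```python
# Hypothetical sketch of a trial-by-trial, speaker-based surprisal measure.
# Assumes adjectives are already mapped to semantic clusters; counts are
# updated after each trial, so the measure tracks the listener's running
# experience within the experiment.
import math
from collections import defaultdict

def speaker_based_surprisal(trials, n_clusters, alpha=1.0):
    """trials: iterable of (speaker, first_adj_cluster, second_adj_cluster).
    Returns the surprisal (in bits) of the second adjective at each trial,
    given running counts for that speaker and first-adjective cluster."""
    counts = defaultdict(lambda: defaultdict(float))  # context -> cluster -> count
    surprisals = []
    for speaker, adj1, adj2 in trials:
        context = (speaker, adj1)
        total = sum(counts[context].values()) + alpha * n_clusters
        p = (counts[context][adj2] + alpha) / total  # add-alpha smoothing
        surprisals.append(-math.log2(p))
        counts[context][adj2] += 1.0  # update only after scoring the trial
    return surprisals

# Example: two speakers, two adjective clusters ("size", "color")
trials = [("A", "size", "color"), ("A", "size", "color"), ("B", "color", "size")]
print(speaker_based_surprisal(trials, n_clusters=2))
```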
The capacity to regulate one’s attention in accordance with fluctuating task demands and environmental contexts is an essential feature of adaptive behavior. Although the electrophysiological correlates of attentional processing have been extensively studied in the laboratory, relatively little is known about how they unfold under more variable, ecologically valid conditions. Accordingly, this study employed a ‘real-world’ EEG design to investigate how attentional processing varies under increasing cognitive, motor, and environmental demands. Forty-four participants performed an auditory oddball task while (1) sitting in a quiet room inside the lab, (2) walking around a sports field, and (3) wayfinding across a university campus. In each condition, participants were instructed to either count or ignore oddball stimuli. While behavioral performance was similar across the lab and field conditions, oddball count accuracy was significantly reduced in the campus condition. Moreover, event-related potential components (mismatch negativity and P3) elicited in both ‘real-world’ settings differed significantly from those obtained under laboratory conditions. These findings demonstrate the impact of environmental factors on attentional processing during simultaneously performed motor and cognitive tasks, highlighting the value of incorporating dynamic and unpredictable contexts within naturalistic designs.
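For readers unfamiliar with the analysis pipeline implied here, the following MNE-Python sketch shows a typical way of extracting oddball ERP components such as the mismatch negativity and P3. The file name, trigger codes, filter settings, and latencies are illustrative assumptions, not the study's actual parameters.

```python
# Minimal MNE-Python sketch of a generic oddball ERP analysis; file name,
# event codes, and filter band are placeholders, not the study's settings.
import mne

raw = mne.io.read_raw_fif("oddball_lab_condition_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)  # typical ERP band-pass

events = mne.find_events(raw)
event_id = {"standard": 1, "oddball": 2}  # assumed trigger codes
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Difference wave (oddball - standard) highlights the MMN/P3 complex
diff = mne.combine_evoked([epochs["oddball"].average(),
                           epochs["standard"].average()],
                          weights=[1, -1])
diff.plot_joint(times=[0.15, 0.3])  # inspect MMN (~150 ms) and P3 (~300 ms)
```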
Objective: Cognitive flexibility has been described as the ability to adjust cognitive and behavioral strategies in response to changing contextual demands. In research and clinical practice, cognitive flexibility is typically assessed via self-report questionnaires and performance on neuropsychological tests. A common assumption among researchers and clinicians is that self-report and neuropsychological tests of cognitive flexibility assess the same or similar constructs, but the extent of the relationship between these two assessment approaches in clinical cohorts remains unknown. We undertook a systematic review and meta-analysis to determine the relationship between self-report and neuropsychological tests of cognitive flexibility in clinical samples. Method: We searched 10 databases and relevant gray literature (e.g., other databases and pearling) from inception to October 2020 and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines. Eleven articles comprising 405 participants satisfied our eligibility criteria. Results: A multilevel random-effects meta-analysis revealed no relationship between self-report and neuropsychological tests of cognitive flexibility (r = 0.01, 95% CI [−0.16, 0.18]). Individual random-effects meta-analyses across 12 different test pairs also found no relationship. Conclusion: Based on our results, the two assessment approaches of cognitive flexibility provide independent information; they do not assess the same construct. These findings have important ramifications for future research and clinical practice: there is a need to reconsider what constructs self-report and neuropsychological tests of “cognitive flexibility” actually assess, and to avoid the interchangeable use of these assessments in clinical samples.
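To clarify the pooling step, the sketch below shows a basic random-effects meta-analysis of correlations using Fisher's z transform and the DerSimonian-Laird estimator. Note that the review itself used a multilevel random-effects model, which this simplified example does not reproduce, and the input values are invented for illustration.

```python
# Hedged sketch of a random-effects meta-analysis of correlations via
# Fisher's z and the DerSimonian-Laird estimator. The rs/ns values are
# illustrative, not the data from the review above.
import numpy as np

def random_effects_pooled_r(rs, ns):
    z = np.arctanh(np.asarray(rs, dtype=float))   # Fisher z-transform
    v = 1.0 / (np.asarray(ns, dtype=float) - 3.0) # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)      # DL between-study variance
    w_star = 1.0 / (v + tau2)
    z_pooled = np.sum(w_star * z) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    lo, hi = z_pooled - 1.96 * se, z_pooled + 1.96 * se
    return np.tanh(z_pooled), (np.tanh(lo), np.tanh(hi))  # back-transform to r

r, ci = random_effects_pooled_r(rs=[0.05, -0.10, 0.12], ns=[40, 35, 60])
print(f"pooled r = {r:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```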