The ability to take another person's perspective is a fundamental aspect of social cognition. Judging what another person can or cannot see is referred to as "level 1 perspective taking." This is thought to be a process that we can engage intentionally, but which also takes place spontaneously. Autism is characterized by impairments of social interaction, which are thought to be related to deficits in implicit rather than explicit perspective taking. To assess both levels of processing with regard to perspective taking, we employed an established task in patients and controls. Our results demonstrate that both groups engage in spontaneous level 1 perspective taking. In contrast to controls, however, patients reacted more slowly when they had to verify the other person's perspective as compared to their own, which indicates that participants with high-functioning autism have selective difficulties in explicit, but not implicit, level 1 perspective taking. These findings demonstrate that while spontaneous level 1 perspective taking appears to be intact in autism, patients are impaired when this ability has to be used explicitly.
Aphasia, the impaired ability to understand or produce language, is a frequent disorder after stroke with devastating effects. Conventional speech and language therapy encompasses formal interventions for improving language and communication abilities. In the chronic stage after stroke, it is effective compared with no treatment, but its effect size is small. We present a new language training approach for the rehabilitation of patients with aphasia based on a brain-computer interface system. The approach exploits the system's capacity to provide feedback time-locked to a brain state. It thus implements the idea that reinforcing an appropriate language processing strategy may induce beneficial brain plasticity.
In our approach, patients perform a simple auditory target word detection task while their EEG is recorded. Continuous decoding of these signals by machine learning models generates individual and immediate brain-state-dependent feedback. It indicates to patients how well they accomplish the task during a training session, even if they are unable to speak.
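To make the feedback loop concrete, the sketch below illustrates one possible decoding pipeline of this kind: epochs of multichannel EEG are reduced to simple time-window features, a linear classifier is calibrated on labelled target/non-target epochs, and its output probability is mapped to an immediate feedback value. The channel count, epoch length, feature choice, and classifier here are illustrative assumptions; the abstract does not specify the actual decoding method used in the study.

```python
# Minimal sketch of a brain-state-dependent feedback loop (illustrative only;
# the classifier, features, and feedback mapping used in the study are assumed).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

RNG = np.random.default_rng(0)
N_CHANNELS, N_SAMPLES = 31, 100          # hypothetical montage and epoch length

def extract_features(epoch):
    """Average each channel in consecutive time windows (simple ERP-style features)."""
    windows = np.array_split(epoch, 10, axis=1)        # 10 time windows
    return np.concatenate([w.mean(axis=1) for w in windows])

# Calibration: fit the decoder on labelled epochs (target vs. non-target words).
X_train = np.stack([extract_features(RNG.normal(size=(N_CHANNELS, N_SAMPLES)))
                    for _ in range(200)])
y_train = RNG.integers(0, 2, size=200)                 # placeholder labels
decoder = LinearDiscriminantAnalysis().fit(X_train, y_train)

# Online loop: decode each new epoch and map it to an immediate feedback value.
def feedback_for(epoch):
    p_target = decoder.predict_proba(extract_features(epoch)[None, :])[0, 1]
    return p_target                                     # could drive a visual or auditory cue

new_epoch = RNG.normal(size=(N_CHANNELS, N_SAMPLES))
print(f"feedback score: {feedback_for(new_epoch):.2f}")
```

In such a setup the feedback score, not speech output, tells the patient how well the intended language processing state was reached, which is what allows training even when the patient cannot speak.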
Results obtained from a proof-of-concept study with 10 stroke patients with mild to severe chronic aphasia (age range: 38–76 years) are remarkable. First, we found that the high-intensity training (30 hours, four days per week) was feasible, despite a high word presentation speed and unfavorable stroke-induced EEG signal characteristics. Second, the training induced a sustained recovery of aphasia, which generalized to multiple language aspects beyond the trained task. Specifically, all tested language assessments (Aachen Aphasia Test, Snodgrass & Vanderwart, Communication Activity Log) showed significant medium to large improvements between pre- and post-training, with a standardized mean difference of 0.63 for the Aachen Aphasia Test and five patients categorized as non-aphasic at the post-training assessment. Third, our data show that these language improvements were not accompanied by significant changes in attention or other non-linguistic skills. Investigating possible modes of action of this brain-computer interface-based language training, neuroimaging data (EEG and resting-state functional MRI) indicate training-induced faster word processing, a strengthened language network, and a re-balancing between the language and default mode networks.
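For readers unfamiliar with the effect size reported above, a standardized mean difference expresses the pre-to-post change in units of the scores' standard deviation. The snippet below is a minimal illustration with hypothetical scores, assuming the common variant that divides the mean change by the pooled standard deviation; the exact SMD variant and the study's data are not reproduced here.

```python
# Illustrative standardized mean difference (Cohen's d style), assuming the
# mean pre-to-post change is divided by the pooled standard deviation.
import numpy as np

pre  = np.array([42.0, 55.0, 61.0, 48.0, 70.0])   # hypothetical pre-training scores
post = np.array([50.0, 60.0, 68.0, 55.0, 74.0])   # hypothetical post-training scores

mean_change = (post - pre).mean()
pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
print(f"standardized mean difference: {mean_change / pooled_sd:.2f}")
```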