Learning to associate auditory information of speech sounds with visual information of letters is a first and critical step for becoming a skilled reader in alphabetic languages. Nevertheless, it remains largely unknown which brain areas subserve the learning and automation of such associations. Here, we employ functional magnetic resonance imaging to study letter-speech sound integration in children with and without developmental dyslexia. The results demonstrate that dyslexic children show reduced neural integration of letters and speech sounds in the planum temporale/Heschl sulcus and the superior temporal sulcus. While cortical responses to speech sounds in fluent readers were modulated by letter-speech sound congruency, with strong suppression effects for incongruent letters, no such modulation was observed in the dyslexic readers. Whole-brain analyses of unisensory visual and auditory group differences additionally revealed reduced unisensory responses to letters in the fusiform gyrus in dyslexic children, as well as reduced activity for processing speech sounds in the anterior superior temporal gyrus, planum temporale/Heschl sulcus and superior temporal sulcus. Importantly, the neural integration of letters and speech sounds in the planum temporale/Heschl sulcus and the neural response to letters in the fusiform gyrus explained almost 40% of the variance in individual reading performance. These findings indicate that an interrelated network of visual, auditory and heteromodal brain areas contributes to the skilled use of letter-speech sound associations necessary for learning to read. By extending similar findings in adults, the data furthermore argue against the notion that reduced neural integration of letters and speech sounds in dyslexia reflects the consequence of a lifetime of reading struggle.
Instead, they support the view that letter-speech sound integration is an emergent property of learning to read that develops inadequately in dyslexic readers, presumably as a result of a deviant interactive specialization of neural systems for processing auditory and visual linguistic inputs.
Human communication depends entirely on the functional integrity of the neuromuscular system. This is devastatingly illustrated in clinical conditions such as the so-called locked-in syndrome (LIS), in which severely motor-disabled patients become incapable of communicating naturally, while being fully conscious and awake. For the last 20 years, research on motor-independent communication has focused on developing brain-computer interfaces (BCIs) that use neuroelectric signals for communication (e.g., [2-7]), and BCIs based on electroencephalography (EEG) have already been applied successfully to affected patients. However, not all patients achieve proficiency in EEG-based BCI control. Thus, more recently, hemodynamic brain signals have also been explored for BCI purposes. Here, we introduce the first spelling device based on fMRI. By exploiting spatiotemporal characteristics of hemodynamic responses evoked by performing differently timed mental imagery tasks, our novel letter encoding technique allows translating any freely chosen answer (letter-by-letter) into reliable and differentiable single-trial fMRI signals. Most importantly, automated letter decoding in real time enables back-and-forth communication within a single scanning session. Because the suggested spelling device requires little effort and pretraining, it is immediately operational and possesses high potential for clinical applications, both for diagnostics and for establishing short-term communication with nonresponsive and severely motor-impaired patients.
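The encoding scheme described above maps each letter to a distinguishable single-trial BOLD signature by varying which mental task is performed and when. A minimal sketch of that idea, using assumed task names and timings (not the paper's actual protocol), might assign each character a unique (task, onset, duration) code:

```python
# Hypothetical sketch of letter encoding for a hemodynamic speller: each
# character maps to a combination of a mental imagery task (which determines
# the cortical source of the BOLD response) and a distinct onset/duration
# within the trial. Tasks and timings below are illustrative assumptions.
from itertools import product

TASKS = ["motor_imagery", "mental_calculation", "inner_speech"]  # assumed
ONSETS_S = [0, 10, 20]      # assumed onset delays within a trial (seconds)
DURATIONS_S = [10, 20, 30]  # assumed task durations (seconds)

def build_codebook(alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ "):
    """Assign each character a unique (task, onset, duration) code."""
    codes = list(product(TASKS, ONSETS_S, DURATIONS_S))
    if len(alphabet) > len(codes):
        raise ValueError("not enough task/timing combinations for alphabet")
    return dict(zip(alphabet, codes))

codebook = build_codebook()
print(codebook["A"])  # ('motor_imagery', 0, 10)
```

With three tasks, three onsets and three durations, 27 combinations suffice for the 26 letters plus a space, illustrating how a small set of voluntarily controllable signal dimensions can span a full alphabet.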
Abstract: The term 'locked-in' syndrome (LIS) describes a medical condition in which affected persons are severely paralyzed yet fully conscious and awake. The resulting anarthria makes it impossible for these patients to communicate naturally, which leads to diagnostic as well as serious practical and ethical problems. Therefore, developing alternative, muscle-independent means of communication is of prime importance. Such communication can be realized via brain-computer interfaces (BCIs), which circumvent the muscular system by using brain signals associated with preserved cognitive, sensory, and emotional brain functions. To date, BCIs based on electrophysiological measures have primarily been developed and applied, with remarkable success. Recently, blood flow-based neuroimaging methods, such as functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS), have also been explored in this context. After reviewing recent literature on the development of hemodynamically based BCIs in particular, we introduce a highly reliable and easy-to-apply communication procedure that enables untrained participants to answer multiple-choice questions motor-independently and with relatively little effort, based on intentionally generated single-trial fMRI signals that can be decoded online. Our technique takes advantage of the participants' capability to voluntarily influence certain spatiotemporal aspects of the blood oxygenation level-dependent (BOLD) signal: source location (by using different mental tasks), signal onset and signal offset. We show that healthy participants are capable of hemodynamically encoding at least four distinct information units on a single-trial level without extensive pretraining and with little effort. Moreover, real-time data analysis based on simple multi-filter correlations allows for automated answer decoding with high accuracy (94.9%), demonstrating the robustness of the presented method.
Following our 'proof of concept', the next step will involve clinical trials with LIS patients, undertaken in close collaboration with their relatives and caretakers, in order to elaborate individually tailored communication protocols. As our procedure can be easily transferred to MRI-equipped clinical sites, it may constitute a simple and effective possibility for online detection of residual consciousness and for LIS patients to communicate basic thoughts and needs in case no other alternative communication means are available (yet), especially in the acute phase of the LIS. Future research may focus on further increasing the efficiency and accuracy of fMRI-based BCIs by implementing sophisticated data analysis methods (e.g., multivariate and independent component analysis) and neurofeedback training techniques. Finally, the presented BCI approach could be transferred to portable fNIRS systems, as only portable systems would enable hemodynamically based communication in daily life situations.