In real-life noisy situations, we can selectively attend to conversations in the presence of irrelevant voices, but the neurocognitive mechanisms underlying such natural listening situations remain largely unexplored. Previous research has shown distributed activity in the mid superior temporal gyrus (STG) and sulcus (STS) during listening to speech and human voices, in the posterior STS and fusiform gyrus when auditory, visual, and linguistic information are combined, and in left-hemisphere temporal and frontal cortical areas during comprehension. In the present functional magnetic resonance imaging (fMRI) study, we investigated how selective attention modulates neural responses to naturalistic audiovisual dialogues. Our healthy adult participants (N = 15) selectively attended to videotaped dialogues between a man and a woman in the presence of irrelevant continuous speech in the background. We modulated the auditory quality of the dialogues with noise vocoding and their visual quality by masking speech-related facial movements. Increases in both auditory and visual quality were associated with bilateral activity enhancements in the STG/STS. In addition, decreased audiovisual stimulus quality elicited enhanced fronto-parietal activity, presumably reflecting increased attentional demands. Finally, attention to the dialogues, relative to a control task in which a fixation cross was attended and the dialogue ignored, yielded enhanced activity in the left planum polare and angular gyrus, the right temporal pole, and the orbitofrontal/ventromedial prefrontal cortex and posterior cingulate gyrus. Our findings suggest that naturalistic conversations effectively engage participants and reveal brain networks related to social perception in addition to speech and semantic processing networks.
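Noise vocoding, used here to manipulate auditory quality, replaces the spectral fine structure of speech with band-limited noise while preserving the slow amplitude envelope in each frequency band; fewer bands yield poorer quality. Below is a minimal Python sketch of the general technique (NumPy/SciPy); it is not the authors' stimulus code, and the band count, band edges, and filter settings are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=8000.0, seed=None):
    """Replace spectral fine structure with band-limited noise, keeping
    only the per-band amplitude envelope (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    noise = rng.standard_normal(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))        # amplitude envelope of this band
        # (real vocoders usually also low-pass the envelope, omitted here)
        carrier = sosfiltfilt(sos, noise)  # noise confined to the same band
        out += env * carrier
    # Match overall RMS to the original so loudness stays comparable.
    out *= np.sqrt(np.mean(speech**2) / (np.mean(out**2) + 1e-12))
    return out
```

Lowering n_bands degrades intelligibility gradually, which is how auditory stimulus quality is typically parametrized in vocoding studies.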
Speech production is an intricate process involving a large number of muscles and cognitive processes. The neural processes underlying speech production are not completely understood. As speech is a uniquely human ability, it cannot be investigated in animal models. High-fidelity human data can only be obtained in clinical settings and is therefore not easily accessible to all researchers. Here, we provide a dataset of 10 participants reading out individual words while we measured intracranial EEG from a total of 1103 electrodes. The data, with its high temporal resolution and coverage of a large variety of cortical and subcortical brain regions, can help to better understand the speech production process. At the same time, the data can be used to test speech decoding and synthesis approaches from neural data, towards the development of speech brain-computer interfaces and speech neuroprostheses.
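As an illustration of the kind of decoding analysis such a dataset supports, the sketch below fits a ridge regression from neural features to a speech spectrogram, a common baseline for speech synthesis from intracranial recordings. The file names, array shapes, and feature choices are hypothetical placeholders, not part of the dataset specification.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

# Hypothetical inputs: X holds windowed neural features (e.g. high-gamma
# power per electrode), Y the log-mel spectrogram of the recorded audio,
# both sampled at the same window rate.
X = np.load("neural_features.npy")    # (n_windows, n_features), assumed
Y = np.load("audio_spectrogram.npy")  # (n_windows, n_mel_bins), assumed

scores = []
for train, test in KFold(n_splits=5, shuffle=False).split(X):  # keep time order
    pred = Ridge(alpha=1.0).fit(X[train], Y[train]).predict(X[test])
    # Mean Pearson correlation between predicted and true spectrogram bins.
    r = [np.corrcoef(pred[:, b], Y[test, b])[0, 1] for b in range(Y.shape[1])]
    scores.append(np.mean(r))
print(f"mean spectrogram correlation: {np.mean(scores):.2f}")
```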
Using brain activity directly as input for assistive tool control can circumvent muscular dysfunction and increase functional independence for physically impaired people. Most invasive motor decoding studies focus on decoding neural signals from the primary motor cortex, which provides a rich but superficial and spatially local signal. Initial decoding endeavors beyond the primary motor cortex have used distributed recordings to demonstrate decoding of motor activity by grouping electrodes into mesoscale brain regions. While these studies show that relevant and decodable movement-related information exists outside the primary motor cortex, these approaches still exclude other mesoscale areas and do not capture the full informational content of the motor system. In this work, we recorded intracranial EEG from 8 epilepsy patients, retaining all electrode contacts except those in or adjacent to the central sulcus. We show that executed and imagined movements can be decoded from non-motor areas; combining all non-motor contacts into a lower-dimensional representation provides enough information for a Riemannian decoder to reach an area under the curve (AUC) of 0.83 ± 0.11. Additionally, by training our decoder on executed movements and testing it on imagined movements, we demonstrate that the two conditions share distributed information in the beta frequency range. By combining relevant information from all areas into a lower-dimensional representation, the decoder achieved high decoding performance without information from the primary motor cortex. This representation makes the decoder more robust to perturbations, signal non-stationarities, and neural tissue degradation. Our results indicate that decoding should look beyond the primary motor cortex and open the way towards more robust and more versatile brain-computer interfaces.
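The Riemannian decoding approach mentioned above can be sketched with the open-source pyriemann library: trial-wise covariance matrices over all non-motor contacts are projected into the tangent space at their Riemannian mean, yielding a fixed-length, lower-dimensional representation that a linear classifier can use. This is a generic sketch under assumed inputs, not the authors' exact pipeline; the array names and epoch shapes are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace

# Hypothetical inputs: beta-band-filtered iEEG epochs from non-motor
# contacts, shaped (n_trials, n_contacts, n_samples), with binary labels
# (movement vs. rest).
epochs = np.load("nonmotor_beta_epochs.npy")
labels = np.load("labels.npy")

clf = make_pipeline(
    Covariances(estimator="oas"),       # shrinkage covariance per trial
    TangentSpace(),                     # map SPD matrices to a flat vector space
    LogisticRegression(max_iter=1000),  # linear classifier on tangent vectors
)
auc = cross_val_score(clf, epochs, labels, cv=5, scoring="roc_auc")
print(f"AUC: {auc.mean():.2f} ± {auc.std():.2f}")
```

Training the same pipeline on executed trials and scoring it on imagined trials would probe the shared beta-band information described in the abstract.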