Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word "grasp" elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language have been offered; e.g., Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102, 59-70].
Brain systems supporting face and voice processing both contribute to the extraction of information important for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magnetoencephalography and functional imaging) in a group of early deaf humans. We show an enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.

Keywords: cross-modal plasticity | deafness | modularity | ventral stream | identity processing

The human brain is endowed with the fundamental ability to adapt its neural circuits in response to experience. Sensory deprivation has long been championed as a model to test how experience interacts with intrinsic constraints to shape functional brain organization. In particular, decades of neuroscientific research have gathered compelling evidence that blindness and deafness are associated with cross-modal recruitment of the sensory-deprived cortices (1). For instance, in early deaf individuals, visual and tactile stimuli induce responses in regions of the cerebral cortex that are sensitive primarily to sounds in the typical hearing brain (2, 3). Animal models of congenital and early deafness suggest that specific visual functions are relocated to discrete regions of the reorganized cortex and that this functional preference in cross-modal recruitment supports superior visual performance. For instance, superior visual motion detection is selectively altered in deaf cats when a portion of the dorsal auditory cortex, specialized for auditory motion processing in the hearing cat, is transiently deactivated (4). These results suggest that cross-modal plasticity associated with early auditory deprivation follows organizational principles that maintain the functional specialization of the colonized brain regions. In humans, however, there is only limited evidence that specific nonauditory inputs are differentially localized to discrete portions of the auditory-deprived cortices. For example, Bola et al. have recently reported, in deaf individuals, cross-modal activations for visual rhythm discrimination in t...
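As a concrete illustration of the kind of face-selectivity measure described above, the following is a minimal Python sketch: it computes a standard contrast-based selectivity index from region-of-interest beta estimates. The function name and the toy values are hypothetical assumptions for illustration, not the study's actual pipeline.

# Hypothetical sketch of a face-selectivity index, assuming per-condition
# GLM beta estimates have already been extracted for a region of interest
# (e.g., the reorganized temporal voice area). All values are illustrative.
import numpy as np

def face_selectivity_index(beta_faces, beta_objects):
    """Contrast-based selectivity: (faces - objects) / (faces + objects)."""
    return (beta_faces - beta_objects) / (beta_faces + beta_objects)

# Toy ROI betas (arbitrary units) for one participant.
betas = {"faces": np.array([1.8, 2.1, 1.6]), "objects": np.array([0.9, 1.1, 0.8])}
fsi = face_selectivity_index(betas["faces"].mean(), betas["objects"].mean())
print(f"Face-selectivity index: {fsi:.2f}")  # positive -> face-preferring ROI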
Human communication relies on the ability to process linguistic structure and to map words and utterances onto our environment. Furthermore, as what we communicate is often not directly encoded in our language (e.g., in the case of irony, jokes, or indirect requests), we need to extract additional cues to infer the beliefs and desires of our conversational partners. Although the functional interplay between language and the ability to mentalise has been discussed in theoretical accounts in the past, the neurobiological underpinnings of these dynamics are currently not well understood. Here, we address this issue using functional imaging (fMRI). Participants listened to question-reply dialogues in which the same reply is interpreted as a direct reply, an indirect reply, or a request for action, depending on the question. We show that inferring meaning from indirect replies engages parts of the mentalising network (mPFC), while requests for action also activate the cortical motor system (IPL). Subsequent connectivity analysis using Dynamic Causal Modelling (DCM) revealed that this pattern of activation is best explained by an increase in effective connectivity from the mentalising network (mPFC) to the action system (IPL). These results are an important step towards a more integrative understanding of the neurobiological basis of indirect speech processing.
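To make the modelled connectivity change concrete, below is a minimal Python simulation of the bilinear DCM neuronal state equation, dx/dt = (A + u_mod * B) x + C u, for two regions standing in for mPFC and IPL. The coupling values are illustrative assumptions, not the parameters estimated in the study, and the sketch omits DCM's hemodynamic forward model and Bayesian model comparison.

# Minimal sketch of the bilinear DCM neuronal state equation,
# dx/dt = (A + u_mod * B) x + C u, for two regions: [mPFC, IPL].
# Coupling values are illustrative, not fitted parameters from the study.
import numpy as np

A = np.array([[-0.5, 0.0],    # intrinsic (fixed) connections; rows = targets
              [ 0.2, -0.5]])  # baseline mPFC -> IPL coupling
B = np.array([[0.0, 0.0],
              [0.4, 0.0]])    # modulation of mPFC -> IPL by "request" trials
C = np.array([[1.0], [0.0]])  # driving (dialogue) input enters mPFC

def simulate(u_drive, u_mod, dt=0.01, T=10.0):
    n = int(T / dt)
    x = np.zeros(2)                    # neuronal states: [mPFC, IPL]
    trace = np.empty((n, 2))
    for t in range(n):
        dx = (A + u_mod * B) @ x + (C @ np.array([u_drive])).ravel()
        x = x + dt * dx                # forward Euler integration
        trace[t] = x
    return trace

direct  = simulate(u_drive=1.0, u_mod=0.0)  # direct-reply condition
request = simulate(u_drive=1.0, u_mod=1.0)  # request: stronger mPFC -> IPL
print(request[-1, 1] > direct[-1, 1])       # IPL activity rises under modulation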
The occipital cortex of early blind individuals (EB) activates during speech processing, challenging the notion of a hard-wired neurobiology of language. But at what stage of speech processing do occipital regions participate in EB? Here we demonstrate that parieto-occipital regions in EB enhance their synchronization to acoustic fluctuations in human speech in the theta range (corresponding to syllabic rate), irrespective of speech intelligibility. Crucially, enhanced synchronization to the intelligibility of speech was selectively observed in primary visual cortex in EB, suggesting that this region is at the interface between speech perception and comprehension. Moreover, EB showed overall enhanced functional connectivity between temporal and occipital cortices that are sensitive to speech intelligibility, and altered directionality of that connectivity when compared to the sighted group. These findings suggest that the occipital cortex of the blind adopts an architecture that allows the tracking of speech material, and therefore does not fully abstract from the reorganized sensory inputs it receives.
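The theta-range synchronization referred to above is commonly quantified as coherence between the speech amplitude envelope and a neural signal. Below is a hedged Python sketch with simulated data; the filter choices (4-8 Hz, fourth-order Butterworth) and the simulated signals are our assumptions, not the paper's exact pipeline.

# Illustrative sketch of theta-band speech tracking: coherence between the
# speech amplitude envelope and a (here simulated) cortical signal. Real
# analyses would use MEG/EEG source time courses; all signals below are toys.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, coherence

fs = 200.0                                   # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                 # 60 s of data
rng = np.random.default_rng(0)

speech = rng.standard_normal(t.size)         # stand-in for a speech waveform
envelope = np.abs(hilbert(speech))           # amplitude envelope

b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
env_theta = filtfilt(b, a, envelope)         # 4-8 Hz (syllabic-rate) envelope

# Simulated "neural" signal that partially follows the theta envelope.
neural = 0.6 * env_theta + 0.4 * rng.standard_normal(t.size)

f, coh = coherence(env_theta, neural, fs=fs, nperseg=int(4 * fs))
theta = (f >= 4) & (f <= 8)
print(f"Mean theta-band coherence: {coh[theta].mean():.2f}")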
Research from the previous decade suggests that word meaning is partially stored in distributed modality-specific cortical networks. However, little is known about the mechanisms by which semantic content from multiple modalities is integrated into a coherent multisensory representation. We therefore aimed to characterize differences between the integration of lexical-semantic information from a single modality and from two sensory modalities. We used magnetoencephalography in humans to investigate changes in oscillatory neuronal activity while participants verified two features for a given target word (e.g., "bus"). Feature pairs consisted of either two features from the same modality (visual: "red," "big") or different modalities (auditory and visual: "red," "loud"). The results suggest that integrating modality-specific features of the target word is associated with enhanced high-frequency power (80-120 Hz), while integrating features from different modalities is associated with a sustained increase in low-frequency power (2-8 Hz). Source reconstruction revealed a peak in the anterior temporal lobe for both low-frequency and high-frequency effects. These results suggest that integrating lexical-semantic knowledge at different cortical scales is reflected in frequency-specific oscillatory neuronal activity in unisensory and multisensory association networks.
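As a sketch of the kind of time-frequency decomposition reported above, the following uses MNE-Python's Morlet wavelet transform on simulated epochs. The frequency bands follow the abstract (2-8 Hz and 80-120 Hz), but the data, sampling rate, and wavelet parameters are illustrative assumptions.

# Hedged sketch of a Morlet-wavelet time-frequency power analysis with
# MNE-Python on random data shaped like MEG epochs; bands follow the abstract.
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 600.0                                          # sampling rate (Hz)
rng = np.random.default_rng(1)
epochs = rng.standard_normal((30, 1, int(2 * sfreq)))  # (epochs, channels, times), 2-s epochs

freqs = np.concatenate([np.arange(2, 9), np.arange(80, 121, 5)])
power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2.0, output="power")

low  = power[:, :, freqs <= 8].mean()                  # 2-8 Hz band power
high = power[:, :, freqs >= 80].mean()                 # 80-120 Hz band power
print(f"low-frequency power: {low:.3f}, high-frequency power: {high:.3f}")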