Human speech comprehension is remarkable for its immediacy and rapidity. The listener interprets an incrementally delivered auditory input, millisecond by millisecond as it is heard, in terms of complex multilevel representations of relevant linguistic and nonlinguistic knowledge. Central to this process are the neural computations involved in semantic combination, whereby the meanings of words are combined into more complex representations, as in the combination of a verb and its following direct object (DO) noun (e.g., “eat the apple”). These combinatorial processes form the backbone for incremental interpretation, enabling listeners to integrate the meaning of each word as it is heard into their dynamic interpretation of the current utterance. Focusing on the verb-DO noun relationship in simple spoken sentences, we applied multivariate pattern analysis and computational semantic modeling to source-localized electro/magnetoencephalographic data to map out the specific representational constraints that are constructed as each word is heard, and to determine how these constraints guide the interpretation of subsequent words in the utterance. Comparing context-independent semantic models of the DO noun with contextually constrained noun models reflecting the semantic properties of the preceding verb, we found that only the contextually constrained model showed a significant fit to the brain data. Pattern-based measures of directed connectivity across the left hemisphere language network revealed a continuous information flow among temporal, inferior frontal, and inferior parietal regions, underpinning the verb’s modification of the DO noun’s activated semantics. These results provide a plausible neural substrate for seamless real-time incremental interpretation on the observed millisecond time scales.
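As a rough illustration of the model-comparison logic in this abstract, the sketch below correlates the dissimilarity structure of two toy semantic models (a context-independent one and one re-weighted by assumed verb constraints) with synthetic neural response patterns, in the spirit of representational similarity analysis. All array names, sizes, and the synthetic data are assumptions for illustration; this is not the authors' actual pipeline, which used source-localized EMEG responses and corpus-derived semantic vectors.

```python
# Illustrative sketch (not the authors' pipeline): compare a context-independent
# semantic model of the direct-object noun with a contextually constrained one
# against synthetic neural response patterns, using an RSA-style correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, n_dims, n_sensors = 20, 50, 100          # hypothetical sizes

# Context-independent noun vectors vs. the same vectors re-weighted by assumed
# verb constraints (both synthetic here).
context_free = rng.normal(size=(n_items, n_dims))
verb_weights = rng.uniform(0.0, 1.0, size=n_dims)
contextual = context_free * verb_weights

# Synthetic neural patterns built to resemble the contextual model plus noise,
# standing in for source-localized response patterns per item.
neural = contextual @ rng.normal(size=(n_dims, n_sensors)) \
         + rng.normal(scale=5.0, size=(n_items, n_sensors))

def model_fit(model_vectors, neural_patterns):
    """Spearman correlation between model and neural dissimilarity structures."""
    model_rdm = pdist(model_vectors, metric="correlation")
    neural_rdm = pdist(neural_patterns, metric="correlation")
    rho, _ = spearmanr(model_rdm, neural_rdm)
    return rho

print("context-independent model fit:", model_fit(context_free, neural))
print("contextually constrained fit :", model_fit(contextual, neural))
```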
How is language processed in the brain by native speakers of different languages? Is there one brain system for all languages, or are different languages subserved by different brain systems? The first view emphasizes commonality, whereas the second emphasizes specificity. We investigated the cortical dynamics involved in processing two very different languages: a tonal language (Chinese) and a nontonal language (English). Using functional MRI and dynamic causal modeling, we exhaustively computed and compared brain network models with all possible connections among nodes of language regions in temporal and frontal cortex. We found that information flow from the posterior to the anterior portion of the temporal cortex was shared by Chinese and English speakers during speech comprehension, whereas the inferior frontal gyrus received neural signals from the left posterior portion of the temporal cortex in English speakers but from the bilateral anterior portion of the temporal cortex in Chinese speakers. Our results revealed that, although speech processing is largely carried out in the common left-hemisphere classical language areas (Broca's and Wernicke's areas) and in anterior temporal cortex, speech comprehension across language groups depends on how these brain regions interact with one another. Moreover, the right anterior temporal cortex, which is crucial for tone processing, is as important as its left homolog in modulating the cortical dynamics of tone-language comprehension. The current study pinpoints the importance of the bilateral anterior temporal cortex in language comprehension, a role that is downplayed or even ignored by popular contemporary models of speech comprehension.

speech perception | tonal language | functional MRI | cortical dynamics

The brain of a newborn discriminates the various phonemic contrasts used in different languages (1) by recruiting distributed cortical regions (2); by 6-10 mo, it is preferentially tuned to the phonemes of the native speech to which it has been exposed (3, 4). In adult humans, the key neural nodes that subserve speech comprehension are located in the superior temporal cortex (5, 6) and the inferior frontal cortex (7). Do these regions interact in different ways depending on the type of language being processed? Little is known about how information flows among these critical language nodes in native speakers of different languages. As one of the unique capacities of the human brain (8), compositional language and its neural mechanisms have been a focus of scientific research for decades. More than 7,000 different spoken languages are used for communication in the world today. By exploring the brain networks subserving properties that are universal across languages as well as differences specific to particular languages, such research helps address essential questions in neurolinguistics, such as what constitutes knowledge of language and how it is acquired (9). Although ...
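To make concrete what an exhaustive model space over a small set of language nodes looks like, the sketch below simply enumerates every possible pattern of directed connections among four hypothetical regions. The node labels are assumptions, and the estimation and Bayesian comparison of each model (done with dynamic causal modeling in practice, e.g., in SPM) are not reproduced here.

```python
# Illustrative sketch: enumerate every possible pattern of directed connections
# among a small set of language-network nodes, as one would when defining an
# exhaustive DCM model space. Model estimation and Bayesian model selection
# (done in SPM in practice) are not reproduced here.
from itertools import combinations, product

nodes = ["pSTG_L", "aSTG_L", "aSTG_R", "IFG_L"]        # hypothetical node set
possible_edges = [(a, b) for a, b in product(nodes, nodes) if a != b]

# Each candidate model is one subset of the possible directed edges.
model_space = []
for k in range(len(possible_edges) + 1):
    for edges in combinations(possible_edges, k):
        model_space.append(edges)

print(f"{len(nodes)} nodes -> {len(possible_edges)} possible directed edges")
print(f"exhaustive model space size: {len(model_space)} models")   # 2**12 = 4096
```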
The hierarchical nature of language requires the human brain to internally parse connected speech and incrementally construct abstract linguistic structures. Recent research has revealed multiple neural processing timescales underlying grammar-based configuration of linguistic hierarchies. However, little is known about where in the cerebral cortex such temporally scaled neural processes occur. This study used novel magnetoencephalography source-imaging techniques combined with a unique language stimulation paradigm to segregate cortical maps synchronized to three levels of linguistic units (i.e., words, phrases, and sentences). Notably, distinct ensembles of cortical loci were identified that track structures at different levels. The superior temporal gyrus was involved in processing all three linguistic levels, while distinct ensembles of other brain regions were recruited to encode each level. Neural activity in the right motor cortex followed only the rhythm of monosyllabic words, which have clear acoustic boundaries, whereas the left anterior temporal lobe and the left inferior frontal gyrus were selectively recruited in processing phrases or sentences. Our results ground multi-timescale hierarchical neural processing of speech in neuroanatomical reality, with specific sets of cortices responsible for different levels of linguistic units.
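A minimal sketch of the kind of frequency-tagging analysis implied here: given a source-level time course, one looks for spectral peaks at the presentation rates of words, phrases, and sentences. The 4, 2, and 1 Hz rates, the sampling rate, and the synthetic signal are assumptions for illustration, not parameters taken from the study.

```python
# Illustrative sketch: detect spectral peaks at the presentation rates of words,
# phrases, and sentences in a synthetic source-level signal. The 4/2/1 Hz rates
# are assumed for illustration only.
import numpy as np

fs = 500.0                       # sampling rate (Hz), hypothetical
duration = 60.0                  # seconds of synthetic data
t = np.arange(0, duration, 1.0 / fs)

word_rate, phrase_rate, sentence_rate = 4.0, 2.0, 1.0   # assumed rates (Hz)

# Synthetic source time course containing responses at all three rates plus noise.
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * word_rate * t)
          + 0.6 * np.sin(2 * np.pi * phrase_rate * t)
          + 0.4 * np.sin(2 * np.pi * sentence_rate * t)
          + rng.normal(scale=2.0, size=t.size))

# Power spectrum via FFT; inspect power at each tagged frequency.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

for label, f in [("word", word_rate), ("phrase", phrase_rate), ("sentence", sentence_rate)]:
    idx = np.argmin(np.abs(freqs - f))
    # Compare the tagged bin with neighboring bins as a crude peak test.
    neighbors = np.r_[power[idx - 5:idx - 1], power[idx + 2:idx + 6]]
    print(f"{label:8s} {f:.0f} Hz: power ratio vs. neighbors = {power[idx] / neighbors.mean():.1f}")
```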
Node definition, that is, delineating how the brain is parcellated into individual functionally related regions, is the first step toward accurately mapping the human connectome. As a result, parcellation of the human brain has drawn considerable attention in neuroscience. The thalamus is known as a relay in the human brain, with its nuclei sending fibers to cortical and subcortical regions. Functional magnetic resonance imaging offers a way to parcellate the thalamus in vivo based on its connectivity properties. However, parcellations from previous studies show that both the number and the distribution of thalamic subdivisions vary with the cortical segmentation method used. In this study, we used an unsupervised clustering method that does not rely on a priori information about cortical segmentation to parcellate the thalamus. Instead, this approach is based on the intrinsic resting-state functional connectivity profiles of the thalamus with the whole brain. A series of cluster solutions was obtained, and an optimal solution was determined. The validity of our parcellation was then investigated by (1) identifying specific resting-state connectivity patterns of thalamic parcels with different brain networks and (2) investigating the task activation and psychophysiological interactions of specific thalamic clusters during 8-Hz flashing checkerboard stimulation with simultaneous finger tapping. Together, the current study provides a reliable parcellation of the thalamus and enhances our understanding of thalamic organization. Furthermore, it provides a parcellation framework that could potentially be extended to other subcortical and cortical regions.
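A minimal sketch of connectivity-based clustering of this general kind is shown below: synthetic thalamic voxels are clustered by their whole-brain connectivity profiles, with the number of clusters chosen by silhouette score. The k-means algorithm, the silhouette criterion, and all data sizes are assumptions; the study's own unsupervised method and optimality criterion may differ.

```python
# Illustrative sketch: cluster synthetic thalamic voxels by their whole-brain
# resting-state connectivity profiles and pick a cluster number via silhouette
# score. Algorithm and criterion are assumptions, not the study's exact method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)
n_voxels, n_targets, n_timepoints = 300, 400, 200    # hypothetical sizes

# Synthetic resting-state time series for thalamic voxels and whole-brain targets.
thalamus_ts = rng.normal(size=(n_voxels, n_timepoints))
brain_ts = rng.normal(size=(n_targets, n_timepoints))

def corr_rows(a, b):
    """Row-wise Pearson correlation between two sets of time series."""
    a_z = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
    b_z = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
    return a_z @ b_z.T / a.shape[1]

# Connectivity profile of each voxel: correlation with every whole-brain target.
profiles = corr_rows(thalamus_ts, brain_ts)          # shape (n_voxels, n_targets)

# Try a range of cluster solutions and keep the one with the best silhouette.
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)
    scores[k] = silhouette_score(profiles, labels)

best_k = max(scores, key=scores.get)
print("silhouette by k:", {k: round(s, 3) for k, s in scores.items()})
print("selected number of thalamic parcels:", best_k)
```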