Understanding spoken language requires transforming ambiguous stimulus streams into a hierarchy of increasingly abstract representations, ranging from speech sounds to meaning. It has been suggested that the brain uses predictive computations to guide the interpretation of incoming information. However, the exact role of prediction in language understanding remains unclear, with widespread disagreement about both the ubiquity of prediction and the level of representation at which predictions unfold. Here, we address both issues by analysing brain recordings of participants listening to audiobooks, and using a state-of-the-art deep neural network (GPT-2) to quantify predictions in a fine-grained, contextual fashion. First, we establish clear evidence for predictive processing, confirming that brain responses to words are modulated by probabilistic predictions. Next, we factorise the model-based predictions into distinct linguistic dimensions, revealing dissociable neural signatures of syntactic, phonemic and semantic predictions. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting theories of hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, and demonstrate that linguistic prediction is not implemented by a single system but occurs throughout the language network, forming a hierarchy of linguistic predictions across all levels of analysis.
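As a rough sketch of how such fine-grained contextual predictions can be quantified, the snippet below queries GPT-2 through the Hugging Face transformers library for the probability of a candidate next word and converts it to surprisal in bits. The context sentence, candidate word, and use of the small "gpt2" checkpoint are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_probs(context: str) -> torch.Tensor:
    """Probability distribution over GPT-2's next token, given the context."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # shape: (1, seq_len, vocab_size)
    return torch.softmax(logits[0, -1], dim=-1)

# Hypothetical context and candidate continuation, chosen only for illustration.
context = "The tired traveller finally reached the"
candidate = " inn"                            # leading space matters in GPT-2's BPE vocabulary
probs = next_token_probs(context)
first_id = tokenizer(candidate).input_ids[0]  # first sub-token of the candidate word
p = probs[first_id].item()
print(f"p({candidate!r} | context) = {p:.4f}; surprisal = {-math.log2(p):.2f} bits")
```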
Despite the increasing availability of Open Science (OS) infrastructure and the rise in policies designed to change behaviour, OS practices are not yet the norm. While pioneering researchers are developing OS practices, the majority sticks to the status quo. To transition to common practice, we must engage a critical proportion of the academic community. In this transition, OS Communities (OSCs) play a key role. OSCs are bottom-up learning groups of scholars that discuss OS within and across disciplines. They make OS knowledge more accessible and facilitate communication among scholars and policymakers. Over the past two years, eleven OSCs were founded in several Dutch university cities, and similar OSCs are starting up in other countries. In this article, we discuss the pivotal role OSCs play in the large-scale transition to OS. We emphasize that, despite the grassroots character of OSCs, support from universities is critical for OSCs to be viable, effective, and sustainable.
Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive side of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Whereas such models have so far been evaluated predominantly against behavioral data, they have only recently been used to explain neurobiological signals. Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools that can be used for testing neural hypotheses about language comprehension. Here, we provide a cursory review of the theoretical foundations and of example neuroimaging studies employing probabilistic language models. We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research.
Keywords: cognitive neuroscience of language, computational linguistics, EEG, MEG, fMRI, probabilistic language models, information theory, surprisal, entropy
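To make the information-theoretic measures named above concrete, here is a toy example of word surprisal and next-word entropy computed from a hypothetical predictive distribution; the numbers are made up for illustration and are not drawn from any study.

```python
import numpy as np

def surprisal(p_word: float) -> float:
    """Surprisal of a word with contextual probability p_word: -log2 p(word | context)."""
    return -np.log2(p_word)

def entropy(dist: np.ndarray) -> float:
    """Shannon entropy (in bits) of a next-word probability distribution."""
    dist = dist[dist > 0]                    # ignore zero-probability entries
    return float(-np.sum(dist * np.log2(dist)))

# Hypothetical distribution over four candidate continuations of a sentence.
p = np.array([0.6, 0.25, 0.1, 0.05])
print(surprisal(p[0]))   # ~0.74 bits: a highly expected word carries little surprisal
print(entropy(p))        # ~1.49 bits: uncertainty about the next word before it is heard
```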