2020 | Preprint
DOI: 10.1101/2020.12.19.423616

Predicting speech from a cortical hierarchy of event-based timescales

Abstract: How can anticipatory neural processes structure the temporal unfolding of context in our natural environment? We here provide evidence for a neural coding scheme that sparsely updates contextual representations at the boundary of events and gives rise to a hierarchical, multi-layered organization of predictive language comprehension. Training artificial neural networks to predict the next word in a story at five stacked timescales and then using model-based functional MRI, we observe a sparse, event-based “sur…


Cited by 5 publications (5 citation statements)
References 145 publications (126 reference statements)
“…Prior work using language models to predict the N400 and other indices of context-based facilitation usually assumed a logarithmic relationship, and thus the majority of work used surprisal (the negative logarithm of probability) as the predictor (e.g., Boston et al., 2008; Brennan et al., 2016; Demberg & Keller, 2008; Frank et al., 2015; Frank & Bod, 2011; Heilbron et al., 2021; Hu et al., 2020; Merkx & Frank, 2020; Monsalve et al., 2012; Schmitt et al., 2020; Shain et al., 2020; van Schijndel & Linzen, 2018; Wilmot & Keller, 2020). Moreover, some of these studies either presented data that were in line with the assumption of a logarithmic relationship (Goodkind & Bicknell, 2018; Wilcox et al., 2020) or explicitly compared the fit of logarithmic and linear models to show the superiority of the former (with probabilities estimated using a 3-gram model: Smith & Levy, 2013; or a 5-gram model: Yan & Jaeger, 2020).…”
Section: Context-Based Facilitation of Semantic Access Follows Both Logarithmic and Linear Functions of Stimulus Probability
Citation type: mentioning | Confidence: 99%
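The statement above turns on the definition of surprisal: a word's surprisal is the negative logarithm of its probability in context, surprisal(w_t) = -log p(w_t | w_1, ..., w_{t-1}). As a minimal sketch of how per-word surprisal is typically read off a causal language model, the snippet below scores a sentence with GPT-2 via the Hugging Face transformers library; the model choice and the helper name token_surprisals are illustrative assumptions, not the pipeline of any study quoted here.

```python
# Minimal sketch: per-token surprisal from a causal language model.
# Model choice (gpt2) and helper name are illustrative assumptions.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids  # (1, seq_len)
    with torch.no_grad():
        logits = model(ids).logits                        # (1, seq_len, vocab)
    # Log-probability of each actual token given its left context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]
    token_logp = log_probs[torch.arange(next_ids.numel()), next_ids]
    surprisal_bits = (-token_logp / math.log(2)).tolist()  # nats -> bits
    tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
    return list(zip(tokens, surprisal_bits))

for tok, s in token_surprisals("The children went outside to play."):
    print(f"{tok!r}: {s:.2f} bits")
```

Since log-probabilities add, the surprisal of a word split into several tokens is simply the sum of its tokens' surprisals.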
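The quoted comparison of logarithmic and linear linking functions (e.g., Smith & Levy, 2013) can be illustrated with a toy regression: simulate a response that is linear in log-probability, then check that the log predictor yields a better fit than raw probability. The data below are simulated for illustration only and stand in for no study's results.

```python
# Toy comparison of linear vs. logarithmic linking functions.
# Simulated data only; not results from any cited study.
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.001, 0.9, size=500)             # word probabilities in context
resp = 2.0 * np.log(p) + rng.normal(0, 1.0, 500)  # response linear in log p

def r_squared(x, y):
    """R^2 of an ordinary least-squares fit of y on an intercept and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

print(f"linear predictor (p)   R^2 = {r_squared(p, resp):.3f}")
print(f"log predictor (log p)  R^2 = {r_squared(np.log(p), resp):.3f}")
```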
“…For instance, spoken words are recognized more quickly when they are heard in a meaningful context (Marslen-Wilson and Tyler, 1975), and words that are made more likely by the context are associated with reduced neural responses compared to less expected words (Holcomb and Neville, 2013; Connolly and Phillips, 1994; Van Petten et al., 1999; Diaz and Swaab, 2007; Broderick et al., 2018). This contextual facilitation is pervasive and is sensitive to language statistics (Willems et al., 2016; Weissbart et al., 2020; Schmitt et al., 2020), as well as to the discourse-level meaning of the language input for the listener (van Berkum et al., 2003; Nieuwland and Van Berkum, 2006).…”
Section: Introduction
Citation type: mentioning | Confidence: 99%