2017
DOI: 10.1016/j.neubiorev.2017.09.001

Probabilistic language models in cognitive neuroscience: Promises and pitfalls

Abstract: Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive part of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Whereas such models have so far predominantly been evaluated against behavioral data…

Cited by 46 publications (33 citation statements)
References 89 publications (130 reference statements)

Citation statements:
“…However, effects of word-level statistics are (nearly) unavoidable: Word frequencies are for a large part responsible for word recognition times (Gardner, Rothkopf, Lapan, & Lafferty, 1987; among many others) and word reading times closely follow word probability conditional on the sentence context (Smith & Levy, 2013). Similar word-probability effects have been observed in neural activity during sentence or text comprehension (Brennan, Stabler, Van Wagenen, Luh, & Hale, 2016; Frank, Otten, Galli, & Vigliocco, 2015; Nelson et al., 2017; Willems, Frank, Nijhof, Hagoort, & Van den Bosch, 2016; for review, see Armeni, Willems, & Frank, 2017). Such probabilistic (i.e.…”
Section: The Role of Statistics in Language Processing (mentioning)
confidence: 83%
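To make the word-probability measure in this excerpt concrete, here is a minimal sketch (not taken from the paper) that estimates the conditional probability of each word with an add-alpha smoothed bigram model and converts it to surprisal, the negative log probability that reading-time and neural studies typically use as a predictor. The toy corpus, function names, and smoothing value are all invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train_bigram(corpus, alpha=0.1):
    """Estimate add-alpha smoothed bigram probabilities P(word | prev)."""
    unigrams, bigrams = Counter(), defaultdict(Counter)
    vocab = {"<s>"}
    for sent in corpus:
        tokens = ["<s>"] + sent.split()
        vocab.update(tokens)
        for prev, word in zip(tokens, tokens[1:]):
            unigrams[prev] += 1
            bigrams[prev][word] += 1
    def prob(prev, word):
        return (bigrams[prev][word] + alpha) / (unigrams[prev] + alpha * len(vocab))
    return prob

def surprisal(prob, sentence):
    """Word-by-word surprisal in bits: -log2 P(word | preceding word)."""
    tokens = ["<s>"] + sentence.split()
    return [(w, -math.log2(prob(prev, w))) for prev, w in zip(tokens, tokens[1:])]

corpus = ["the dog barked", "the cat slept", "a dog slept"]
prob = train_bigram(corpus)
for word, bits in surprisal(prob, "the dog slept"):
    print(f"{word}: {bits:.2f} bits")
```

A bigram model is of course far weaker than the parsers and recurrent networks the cited studies use, but the regressor it yields, one surprisal value per word, has the same shape.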
“…Importantly, this study serves to link information complexity metrics to mechanistic accounts of cognition in a more naturalistic way. Metrics such as surprisal and entropy reduction are useful in that they can be generated word-by-word according to the specifications of a language model, which can then be related to behavioral or neural processing data to assess the viability of that model (Armeni, Willems, & Frank, 2017; Brennan, 2016; Hale, 2016). As noted above, though, these metrics tend to be estimated using computational parsers or other statistical language models.…”
Section: Discussion (mentioning)
confidence: 99%
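The excerpt names two word-by-word metrics: surprisal is sketched above, and the snippet below illustrates entropy reduction, Hale's measure of how much a new word narrows the probability distribution over possible continuations. The two continuation distributions are invented placeholders; a real estimate would come from a probabilistic parser or language model.

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a distribution {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical distributions over sentence continuations before and after
# the comprehender encounters the next word (values are made up).
before_word = {"parse_a": 0.5, "parse_b": 0.3, "parse_c": 0.2}
after_word = {"parse_a": 0.9, "parse_b": 0.1}

# Entropy reduction is clipped at zero: only decreases in uncertainty count.
reduction = max(0.0, entropy(before_word) - entropy(after_word))
print(f"entropy reduction: {reduction:.2f} bits")
```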
“…Building on the work of Hale (2001) and Levy (2008), who proposed mappings from natural language processing methods (parsers and Shannon entropy), the use of NLP models as regressors has expanded in fMRI research (cf. Brennan, Stabler, Van Wagenen, Luh, & Hale, 2016) and is slowly making the transition towards M/EEG (see Armeni, Willems, & Frank, 2017; Brennan, 2016 for review, as well as van Schijndel, Murphy, & Schuler, 2015 for an early attempt in the time-frequency domain with only three participants).…”
Section: First Steps: Probes and Parsers (mentioning)
confidence: 99%
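As a concrete picture of the "NLP models as regressors" approach this excerpt describes, the sketch below fits an ordinary least-squares model relating a word-aligned neural measure to per-word surprisal. Both arrays are invented placeholders standing in for real model output and real fMRI or M/EEG data.

```python
import numpy as np

# Hypothetical per-word surprisal values from a language model, and a
# neural measure aligned to the same words (all numbers are invented).
surprisal = np.array([2.1, 5.8, 1.3, 7.2, 3.0, 4.4])
neural = np.array([0.9, 2.4, 0.5, 3.1, 1.2, 1.8])

# OLS with an intercept: neural ~ b0 + b1 * surprisal. A reliable
# nonzero slope is the kind of evidence the cited studies look for.
X = np.column_stack([np.ones_like(surprisal), surprisal])
beta, *_ = np.linalg.lstsq(X, neural, rcond=None)
print(f"intercept = {beta[0]:.2f}, surprisal slope = {beta[1]:.2f}")
```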