2017
DOI: 10.1101/168161
Preprint

Probabilistic language models in cognitive neuroscience: promises and pitfalls

Abstract: Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive part of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Whereas such models have so far predominantly been evaluated against behavioral data, only recently have the models been …


Cited by 21 publications (24 citation statements)
References 96 publications (118 reference statements)
“…Uncertainty and probabilistic computation have received increased attention as viable paradigms for describing information processing at the cognitive (Chater, Tenenbaum, & Yuille, 2006) and neural levels (Hasson, 2017; Knill & Pouget, 2004) … processing (see Armeni, Willems, & Frank, 2017, for discussion).…”
Section: Discussion
confidence: 99%
“…Presently, the work employing these metrics (present study included) is not addressing the question of neural codes, that is, how probabilistic knowledge and functions needed for language understanding can be encoded and decoded with models of neural circuits. They serve, instead, in describing the basic phenomenon, a statistical relationship between cognitive variables and neurophysiological observables, which in itself requires explanation (see Armeni et al., 2017; Carlson, Goddard, Kaplan, Klein, & Ritchie, 2017; Shagrir & Bechtel, 2017, for similar remarks).…”
Section: Discussion
confidence: 99%
“…The value of surprisal (S) indicates how unexpected a given word is on the basis of the preceding words [1]. In order to calculate the surprisal value associated with each word of the sentence, we used the algorithms developed by Roark [2] with a Probabilistic Context-Free Grammar model, where P(w_i) corresponds to the probability of occurrence of the target word and P(w_{1,2,…,i−1}) to the probability of occurrence of the preceding words.…”
Section: Methods
confidence: 99%
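The computation described in this quote can be sketched in a few lines. This is a minimal illustration of the general definition of surprisal, the negative log conditional probability of a word given its preceding words, expressed via the two probabilities the quote names; it is not the Roark parser pipeline the cited authors used, and the function name and toy probabilities are hypothetical.

```python
import math

def surprisal(p_prefix_with_word: float, p_prefix: float) -> float:
    """Word surprisal S(w_i) = -log2 P(w_i | w_1..w_{i-1}), in bits.

    Computed from the two quantities described in the quote:
    p_prefix_with_word: P(w_1, ..., w_i), the prefix including the target word.
    p_prefix:           P(w_1, ..., w_{i-1}), the preceding words alone.
    """
    return -math.log2(p_prefix_with_word / p_prefix)

# Toy numbers: if the preceding words have probability 0.5 and the prefix
# including the target word has probability 0.25, then
# P(w_i | context) = 0.5, i.e. the word carries 1 bit of surprisal.
print(surprisal(0.25, 0.5))  # → 1.0
```

A highly predictable word (conditional probability near 1) yields surprisal near 0 bits; the less expected the word, the larger the value, which is what makes surprisal a per-word complexity metric.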
“…The value of surprisal (S) indicates how unexpected a given word is on the basis of the preceding word [27]. In order to calculate the surprisal value associated with each word of the sentence, we used the algorithms developed … to the probability of occurrence of the preceding words.…”
Section: Surprisal Value Computation
confidence: 99%