2018
DOI: 10.1080/0163853x.2018.1448677

Expectation-based Comprehension: Modeling the Interaction of World Knowledge and Linguistic Experience

Abstract: The processing difficulty of each word we encounter in a sentence is affected by both our prior linguistic experience and our general knowledge about the world. Computational models of incremental language processing have, however, been limited in accounting for the influence of world knowledge. We develop an incremental model of language comprehension that constructs, on a word-by-word basis, rich, probabilistic situation model representations. To quantify linguistic processing effort, we adopt Surprisal Theory…
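
The abstract appeals to Surprisal Theory as the linking hypothesis between the model and processing effort. As a point of reference, here is a minimal sketch of the standard surprisal computation (Hale, 2001; Levy, 2008); the probability values are invented for illustration, and this is the generic definition rather than the paper's specific model:

```python
import math

def surprisal(p_word_given_context: float) -> float:
    """Standard surprisal: a word's processing effort scales with its
    negative log probability in context (Hale, 2001; Levy, 2008).
    Units are bits when using log base 2."""
    return -math.log2(p_word_given_context)

# Toy illustration with made-up probabilities: a highly predictable
# word carries little surprisal; an unexpected word carries much more.
print(surprisal(0.8))   # ~0.32 bits
print(surprisal(0.01))  # ~6.64 bits
```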

Cited by 51 publications (97 citation statements)
References 77 publications
“…Alignment is postulated to occur at all linguistic levels, up to situation models. The construction of mental models has further influenced the 'immersed experiencer model' (Zwaan and Ross 2004) (see Barsalou 1999, for a comprehensive overview of perceptual theories of cognition) and probabilistic neurocomputational models of expectation-based comprehension (Venhuizen et al 2018). Research on the interaction of language comprehension with visual attention has been continued in speech perception models (Allopenna et al 1998; Smith et al 2017), in processing accounts of situated sentence comprehension (Altmann and Kamide 2007; Huettig et al 2018; Crocker 2006, 2007), and in computational models of visual attention and situated language comprehension (Crocker et al 2010; Kukona and Tabor 2011; Mayberry et al 2009; Roy and Mukherjee 2005).…”
Section: Towards Predicting Context Effects
confidence: 99%
“…Where they differ is in how they represent meaning:
• Propositional structures (Brouwer, Crocker, Venhuizen, & Hoeks, 2017; Hinaut & Dominey, 2013) identify the agent, patient, and action of a given sentence; that is, they represent the semantic roles and the concepts that fill those roles.
• Situation vectors (Frank & Vigliocco, 2011; Venhuizen, Crocker, & Brouwer, 2019) represent the state of affairs in the world as described by the sentence, without any internal role-concept structure.
• Sentence gestalts (Rabovsky, Hansen, & McClelland, 2018; based on a classical model by McClelland, St. John, & Taraban, 1989) are developed by the neural network itself during training.…”
Section: RNNs for Sentence Comprehension
confidence: 99%
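
To make the contrast between the first two representation types concrete, here is a hypothetical toy sketch; the names, sentence, and vector dimensions are invented for illustration and are not taken from any of the cited models:

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    """Propositional structure: explicit role-filler bindings, in the
    spirit of Brouwer et al. (2017)-style representations."""
    action: str
    agent: str
    patient: str

# "The girl eats an apple" with its semantic roles made explicit.
prop = Proposition(action="eat", agent="girl", patient="apple")

# A situation vector encodes the described state of affairs as a flat
# point in a meaning space, with no internal role-concept structure;
# the dimensions here are arbitrary toy values.
situation_vector = [0.0, 1.0, 1.0, 0.0, 1.0]
```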
“…The propositional structure models by Brouwer et al (2017) and Hinaut and Dominey (2013) take this measure to correspond to the well-known P600 EEG component, which is often viewed as indicative of a sentence reinterpretation process. The situation vectors models by Frank and Vigliocco (2011) and Venhuizen et al (2019) show that the amount of change in the network's output can be expressed in terms of word surprisal. Frank and Vigliocco further demonstrate that this predicts simulated word-processing time, that is, their model provides a mechanistic account of why higher surprisal leads to longer reading time.…”
Section: RNNs for Sentence Comprehension
confidence: 99%
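
The surprisal-to-reading-time link described above is straightforward to operationalize: given any incremental model's next-word distributions, per-word surprisal falls out directly, and reading time is commonly modeled as a linear function of it (Smith & Levy, 2013). A sketch under those assumptions, with made-up toy distributions standing in for a real model's output:

```python
import numpy as np

def word_surprisals(probs_per_step, word_ids):
    """Per-word surprisal in bits, given a model's next-word
    probability distribution at each position and the indices of the
    words actually observed. Both arguments are hypothetical
    stand-ins for any incremental model's output."""
    return [-np.log2(p[w]) for p, w in zip(probs_per_step, word_ids)]

# Reading time is then typically linked by a linear term:
#   RT(w) ≈ baseline + slope * surprisal(w)   (Smith & Levy, 2013)
probs = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.6, 0.3])]
observed = [0, 2]  # indices of the words actually read
print(word_surprisals(probs, observed))  # [~0.51, ~1.74] bits
```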
“…In this paper, we take the latter approach by building upon previous work by Venhuizen et al [33] (henceforth, VCB), who put forward a model of language comprehension in which surprisal estimates are derived from the probabilistic, distributed meaning representations that the model constructs on a word-by-word basis. By systematically manipulating the model’s linguistic experience (the linguistic input history of the model) and world knowledge (the probabilistic knowledge captured within the representations), VCB show that, like human comprehenders, the model’s comprehension-centric surprisal estimates are sensitive to both of these information sources.…”
Section: Introduction
confidence: 99%
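
VCB's comprehension-centric surprisal is computed over the model's situation representations rather than over word strings. One way to sketch the idea, following the distributed situation space tradition this line of work builds on (Frank and colleagues), is to treat situation vectors as graded truth values over sampled observations and take surprisal as the negative log conditional belief between successive states; the exact formulation in [33] may differ in detail:

```python
import numpy as np

def conditional_belief(v_new, v_prev):
    """Conditional probability of the newly described situation given
    the previous comprehension state, estimated from situation
    vectors whose components are (graded) truth values across sampled
    observations. An assumed formulation, not a quote from [33]."""
    return float(np.dot(v_new, v_prev) / np.sum(v_prev))

def online_surprisal(v_new, v_prev):
    """Comprehension-centric surprisal of the latest word: how far the
    updated situation vector deviates from what the previous state led
    us to expect. A sketch of the idea, not VCB's exact equation."""
    return -np.log2(conditional_belief(v_new, v_prev))

# Hypothetical 6-observation toy space: each entry records whether the
# described situation holds in that sampled observation.
v_before = np.array([1., 1., 1., 0., 0., 0.])  # state after word t-1
v_after  = np.array([1., 0., 0., 0., 0., 0.])  # state after word t
print(online_surprisal(v_after, v_before))     # ~1.58 bits
```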
“…In what follows, we first introduce the probabilistic, distributed meaning representations used by VCB [33], from a novel, formal semantic perspective (cf. [35]) (Section 2.1).…”
Section: Introduction
confidence: 99%