2020
DOI: 10.1016/j.neuroimage.2020.117281

Semantics-weighted lexical surprisal modeling of naturalistic functional MRI time-series during spoken narrative listening

Cited by 21 publications (25 citation statements)
References 75 publications
“…Additionally, we controlled semantic similarity for each version of each text per condition following a probabilistic approach (Russo et al., 2020). Drawing on the mean frequency of use of content words, we calculated a semantic similarity index for each text per condition, formalized as the probability of encountering the target word given a set of content words, that is, P(P7 | P1 ∩ P2 ∩ P3 ∩ P4 ∩ P5 ∩ P6 ∩ P8), where P1 to P6 correspond to the frequency distributions of the context content words, P7 to the frequency distribution of the high-predictability target word, and P8 to the frequency distribution of the low-predictability word (see Supplementary Material).…”
Section: Lamentablemente él había sido el vencedor (“Unfortunately, he had been the winner”)
confidence: 99%
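
The conditional-probability formulation in this statement can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not the cited authors' implementation: it assumes the index is approximated from add-one-smoothed relative corpus frequencies under an independence assumption, and every function name, variable name, and frequency value is hypothetical.

```python
from typing import Dict, Iterable

def relative_frequency(word: str, freq_table: Dict[str, int]) -> float:
    """Add-one-smoothed relative frequency of `word` in a corpus frequency table."""
    total = sum(freq_table.values()) + len(freq_table)
    return (freq_table.get(word, 0) + 1) / total

def semantic_similarity_index(target: str,
                              context_words: Iterable[str],
                              freq_table: Dict[str, int]) -> float:
    """P(target | context words), sketched as the joint probability of target
    and context under independence, divided by the probability of the context
    alone. Under full independence this reduces to P(target); a real
    implementation would use co-occurrence statistics instead."""
    p_context = 1.0
    for w in context_words:
        p_context *= relative_frequency(w, freq_table)
    p_joint = p_context * relative_frequency(target, freq_table)
    return p_joint / p_context

# Hypothetical example: high- vs. low-predictability target for one context.
freqs = {"race": 120, "crossed": 40, "line": 90, "winner": 60, "carrot": 5}
context = ["race", "crossed", "line"]
print(semantic_similarity_index("winner", context, freqs))  # high-predictability target
print(semantic_similarity_index("carrot", context, freqs))  # low-predictability target
```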
“…These models are often rooted in computational algorithms that assign surprisal (the degree to which the word is unexpected given the preceding context) and entropy (the degree to which the word constrains upcoming linguistic content) values to each word in a sentence, reflecting how easily a word can be integrated given the left context and the overall statistics of the language (Futrell et al., 2020). Neuroimaging evidence indicates that activation in language-related brain areas correlates with processing difficulty as reflected by these information-theoretic measures (e.g., Russo et al., 2020; Henderson, Choi, Lowder, & Ferreira, 2016). Recent neural models of language processing locate the operation of combining two elements into a syntactic representation in Brodmann's area 44 of Broca's area, which forms a network for processing syntactic complexity in combination with the superior temporal gyrus (Fedorenko & Blank, 2020; Zaccarella, Schell, & Friederici, 2017; for a competing view, see Matchin & Hickok, 2020).…”
Section: Psycholinguistic Issues in Sentence Processing Research
confidence: 99%
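
The two information-theoretic measures named in this statement have standard definitions that a short sketch can make concrete. The Python snippet below computes surprisal and entropy from a toy next-word probability distribution; the distribution and its values are invented for illustration, whereas an actual study would derive them from an n-gram or neural language model.

```python
import math
from typing import Dict

def surprisal(word: str, next_word_probs: Dict[str, float]) -> float:
    """Surprisal in bits: -log2 P(word | left context).
    High when the word is unexpected given the context."""
    return -math.log2(next_word_probs[word])

def entropy(next_word_probs: Dict[str, float]) -> float:
    """Entropy in bits of the next-word distribution.
    Low entropy means the context strongly constrains what comes next."""
    return -sum(p * math.log2(p) for p in next_word_probs.values() if p > 0)

# Toy distribution for a context like "he had been the ...".
probs = {"winner": 0.6, "loser": 0.3, "carrot": 0.1}
print(surprisal("winner", probs))  # ~0.74 bits: expected word
print(surprisal("carrot", probs))  # ~3.32 bits: surprising word
print(entropy(probs))              # ~1.30 bits: overall constraint of the context
```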
“…The prediction of the next word from a given sequence also occurs in the human brain when continuously engaged in the generation of meaningful linguistic structures from the auditory stream of words perceived up to that moment, such as during natural listening [6, 11, 12]. In addition, it is an established principle of sensory perception that an attention mechanism is also needed in the human brain to improve the prediction mechanism via synergistic modulation of input signals [13, 14].…”
Section: Introduction
confidence: 99%
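
The idea that attention improves prediction by modulating input signals can be illustrated with a toy softmax-weighted pooling, sketched below. This is an assumption-laden illustration of the general computational principle, not the neural mechanism described in the cited work; all names and values are hypothetical.

```python
import math
from typing import List

def softmax(scores: List[float]) -> List[float]:
    """Normalize relevance scores into attention weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(signals: List[float], relevance: List[float]) -> float:
    """Pool input signals, weighting each by its softmax attention weight,
    so that more relevant inputs dominate the prediction signal."""
    weights = softmax(relevance)
    return sum(w * s for w, s in zip(weights, signals))

# Three input streams, the second judged most relevant by attention.
print(attend(signals=[0.2, 0.9, 0.1], relevance=[0.5, 2.0, 0.1]))
```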