2021
DOI: 10.1101/2021.11.12.467812
Preprint

Graded sensitivity to structure and meaning throughout the human language network

Abstract: How are syntactically and semantically connected word sequences, or constituents, represented in the human language system? An influential fMRI study, Pallier et al. (2011, PNAS), manipulated the length of constituents in sequences of words or pseudowords. They reported that some language regions (in the anterior temporal cortex and near the temporo-parietal junction) were sensitive to constituent length only for sequences of real words but not pseudowords. In contrast, language regions in the inferior frontal…

Cited by 15 publications (22 citation statements)
References 155 publications
“…Finally, we use a recent method to decompose these neural activations into syntactic and semantic representations (33), and show that the long-range forecasts are predominantly driven by semantic features. This finding strengthens the idea that while syntax may be explicitly represented in neural activity (52, 53), predicting high-level meaning may be at the core of language processing (54, 55).…”
Section: Syntactic and Semantic Predictions (supporting)
confidence: 83%
“…One constraint in this space of hypotheses has to do with the size of the language network's 'temporal integration (or receptive) window' (e.g., Lerner et al., 2011). In particular, previous work has shown that the temporal integration window of the language network is relatively short, on the order of a clause or sentence (e.g., Lerner et al., 2011; Blank & Fedorenko, 2020; Shain et al., 2021). It is therefore likely that non-verbal meanings that the language system is concerned with have to do with events, but not longer-timescale representations like situation models whose construction can span long sequences of events (Johnson-Laird, 1983; Zwaan & Radvansky, 1998; Loschky et al., 2019).…”
Section: Discussion (mentioning)
confidence: 99%
“…However, an architecture where different kinds of syntactic dependencies are supported by distinct mechanisms seems unlikely. Indeed, in language comprehension, the brain areas that are sensitive to simple two-word composition (e.g., Pallier et al., 2011; Shain et al., 2021b) are also engaged for the processing of non-local dependencies (e.g., Blank et al., 2016; Shain, Blank et al., 2020; Shain et al., 2021a).…”
Section: Lack Of Selectivity For Sentence Generation Relative To Lexi... (mentioning)
confidence: 99%
“…To illuminate the contribution of the language-selective network to language production, we use fMRI to examine the responses of the language areas, defined in individual participants by an extensively validated comprehension-based language localizer (Fedorenko et al., 2010), during production tasks. To examine both phrase-structure building and lexical access using this precision fMRI approach, we adapt a paradigm that has proven fruitful in probing combinatorial and lexico-semantic processes in comprehension (e.g., Friederici et al., 2000; Humphries et al., 2007; Fedorenko et al., 2010, 2012a; Pallier et al., 2011; Shain et al., 2021). In particular, we examine neural responses during spoken (Experiments 1-2) and typed (Experiment 3) production of sentences and lists of words (as well as control, nonword sequences in Experiments 1 and 3).…”
Section: Main Text (mentioning)
confidence: 99%