2020
DOI: 10.1101/2020.09.28.316935
Preprint

Combining computational controls with natural text reveals new aspects of meaning composition

Abstract: To study a core component of human intelligence—our ability to combine the meaning of words—neuroscientists look for neural correlates of meaning composition, such as brain activity proportional to the difficulty of understanding a sentence. However, little is known about the product of meaning composition in the brain—the combined meaning of words beyond their individual meaning. We term this product “supra-word meaning” and devise a computational representation for it by using recent neural network algorithm…


Cited by 25 publications (37 citation statements) · References 84 publications
“…In addition, we explored the neuro-anatomical … More generally, this paper adds another data point demonstrating the relevance of tools from computational linguistics in cognitive neuroscience research (Jain and Huth, 2018; Toneva et al., 2020) and the value of naturalistic stimuli in contextually situated and ecologically valid research (Maguire, 2012; Brennan, 2016; Hamilton and Huth, 2020).…”
Section: Discussion
confidence: 99%
“…As one salient example, Wehbe et al. (2014) investigated how well vector representations predicted brain activity for subjects reading fiction, in their case material from Harry Potter and the Sorcerer's Stone, based on within-sentence context. Also working within the sentence using naturalistic listening, Toneva et al. (2020) derived composed representations of "supra-word meaning" using contextualized word representations (ELMo; Peters et al., 2018) to capture the compositional meaning of multi-word expressions and event/argument structure. Jain and Huth (2018) make predictions of neural activity using LSTM representations from up to the previous 20 words of context (which would be on the order of 8-10 seconds of speech on average).…”
Section: Neural Language Models in Cognitive Neuroscience
confidence: 99%
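The work quoted above follows the standard "encoding model" recipe: fit a regularized linear map from stimulus word features (e.g., contextual embeddings) to recorded brain responses, then score held-out predictions per voxel. The papers cited differ in feature choice, not in this basic pipeline. Below is a minimal sketch of that recipe using closed-form ridge regression on synthetic stand-ins; the array shapes, the noise level, and the alpha value are illustrative assumptions, not taken from any of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for real data: 200 time points,
# 50-dimensional word features, 10 voxels.
n_samples, n_features, n_voxels = 200, 50, 10
X = rng.standard_normal((n_samples, n_features))        # stimulus features
true_W = rng.standard_normal((n_features, n_voxels))    # hidden ground truth
Y = X @ true_W + 0.1 * rng.standard_normal((n_samples, n_voxels))  # "brain" data

# Ridge regression in closed form: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

# Score on held-out data with per-voxel Pearson correlation,
# the usual evaluation metric in this literature.
X_test = rng.standard_normal((100, n_features))
Y_test = X_test @ true_W + 0.1 * rng.standard_normal((100, n_voxels))
Y_pred = X_test @ W
r = np.array([np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1]
              for v in range(n_voxels)])
print("mean held-out correlation per voxel:", r.mean())
```

In practice the feature matrix would hold ELMo- or LSTM-derived context vectors aligned to the stimulus, and alpha would be chosen by cross-validation per voxel; the linear-readout-plus-correlation structure shown here is the common core.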
“…Anderson et al., n.d.; Toneva et al., 2021; Schrimpf et al., 2021). However, many of the deep learning models are not intended to match the human cognitive process.…”
Section: Discussion
confidence: 99%
“…; J. Brennan et al., 2012, 2016; Jackson et al., 2021; Hale et al., 2018; Martin & Doumas, 2019; Toneva & Wehbe, 2019; Toneva et al., 2021; Schrimpf et al., 2021; Wehbe et al., 2014). Here we compared the fit of five computational models for pronoun processing with the fMRI and MEG data within the LMTG functional regions of interest (fROIs) using representational similarity analysis (Kriegeskorte et al., 2008).…”
Section: Introduction
confidence: 99%