2020
DOI: 10.1609/aaai.v34i05.6457
Probing Brain Activation Patterns by Dissociating Semantics and Syntax in Sentences

Abstract: The relation between semantics and syntax, and where they are represented at the neural level, has been extensively debated in neuroscience. Existing methods use manually designed stimuli to distinguish semantic and syntactic information in a sentence, which may not generalize beyond the experimental setting. This paper proposes an alternative framework to study the brain representation of semantics and syntax. Specifically, we embed the highly-controlled stimuli as objective functions in learning sentence repres…

Cited by 10 publications (10 citation statements); references 25 publications.
“…These brain networks also overlap with classic semantic brain networks (Hagoort and Indefrey 2014; Huth et al. 2016) to a great extent, at least under the spatial resolution of the current fMRI technique. This finding supports a popular research view that language networks as a whole are sensitive to both semantic and syntactic information (Blank et al. 2016), and that the brain networks activated by semantics and syntax largely overlap (Wang et al. 2020).…”
Section: Cortical Representations of Syntactic Features (supporting)
confidence: 85%
“…A series of follow-up works used different types of representations to explore the language processing mechanism in the brain (Huth et al. 2016; Gauthier and Levy 2019; Sun et al. 2019; Toneva and Wehbe 2019; Jain et al. 2020). To probe sentence-level semantic and syntactic brain activation patterns, Wang et al. (2020) proposed a two-channel variational autoencoder model that dissociates sentences into semantic and syntactic representations and separately associates them with brain imaging data to find feature-correlated brain regions. Wehbe et al. (2014) presented an integrated computational model that incorporates multiple reading sub-processes to predict the detailed neural representation of diverse story features.…”
Section: Related Work (mentioning)
confidence: 99%
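The two-channel dissociation-and-association pipeline described above can be illustrated with a toy numpy sketch (all shapes, names, and the linear "heads" here are hypothetical stand-ins; the actual Wang et al. (2020) model is a trained variational autoencoder, not random projections): sentences are mapped into a semantic channel and a syntactic channel, and each channel is separately regressed against voxel responses to see which explains them better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 100 sentences with 50-d embeddings, split by two
# hypothetical projection heads into an 8-d "semantic" channel
# and an 8-d "syntactic" channel (stand-ins for trained encoders).
n_sent, d_emb, d_lat = 100, 50, 8
X = rng.normal(size=(n_sent, d_emb))      # sentence embeddings
W_sem = rng.normal(size=(d_emb, d_lat))   # semantic head
W_syn = rng.normal(size=(d_emb, d_lat))   # syntactic head

z_sem = X @ W_sem                         # semantic representations
z_syn = X @ W_syn                         # syntactic representations

# Synthetic voxel responses, generated (for this demo) from the
# semantic channel plus noise.
n_vox = 20
Y = z_sem @ rng.normal(size=(d_lat, n_vox)) + 0.1 * rng.normal(size=(n_sent, n_vox))

def ridge_predict(Z, Y, lam=1.0):
    """Closed-form ridge regression: B = (Z'Z + lam*I)^-1 Z'Y."""
    B = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)
    return Z @ B

def r2(Y, Y_hat):
    """Overall fraction of variance explained."""
    return 1.0 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()

score_sem = r2(Y, ridge_predict(z_sem, Y))
score_syn = r2(Y, ridge_predict(z_syn, Y))

# Voxels were synthesized from the semantic channel, so the
# semantic regression should fit substantially better.
print(score_sem, score_syn)
```

In the real method the two channels come from a model trained to disentangle meaning from structure; the per-channel encoding comparison, however, follows this same associate-and-score pattern.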
“…The author believes that language-cognition experiments combined with computational models can overcome the research limitations above. For example, computational models can separate different experimental variables and allow the roles of different language variables and cognitive functions to be studied using neural activity data collected from natural texts [136, 137, 138]. With the continuing improvement of neural-network-based language-computation methods, it is increasingly accurate to use models to separate different language features, so that visual and auditory perception, multimodal information fusion, and language in different regions of the brain can be computed on the same batch of data.…”
Section: Correlating Multiple Linguistic Variables and Cognitive Func... (mentioning)
confidence: 99%
“…Our approach primarily draws on text-controlled generation, which transfers knowledge by dissociating entangled representations. Cross-training disentangling methods for controlled text generation (Chen et al., 2019a,b; Wang et al., 2020a), implemented in a VGVAE framework and guided by a paraphrase reconstruction loss, have greatly inspired our work. In particular, the syntax input of the code can be conveyed via an AST.…”
Section: Semantic (mentioning)
confidence: 99%