Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of simple events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical structure, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
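To make the cross-decoding logic concrete, the Python sketch below (using scikit-learn) trains a classifier to discriminate four event labels from patterns evoked by one subset of sentence forms and tests it on patterns from held-out forms. The array names, design sizes, and simulated data are illustrative assumptions, not the study's actual pipeline.

# Hedged sketch of cross-sentence decoding with scikit-learn. The neural
# patterns are simulated placeholders; in the study they would be voxel-wise
# response patterns for each sentence.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_events, n_forms, n_voxels = 4, 6, 200              # assumed design: 4 events x 6 sentence forms
X = rng.normal(size=(n_events * n_forms, n_voxels))  # one activity pattern per sentence
y = np.repeat(np.arange(n_events), n_forms)          # event identity to be decoded
form = np.tile(np.arange(n_forms), n_events)         # which sentence form produced each pattern

clf = make_pipeline(StandardScaler(), LinearSVC())
cv = LeaveOneGroupOut()                              # train on some forms, test on a novel form
scores = [clf.fit(X[tr], y[tr]).score(X[te], y[te])
          for tr, te in cv.split(X, y, groups=form)]
print(f"cross-form decoding accuracy: {np.mean(scores):.2f} (chance = 0.25)")

Above-chance accuracy under this scheme is evidence that the event code generalises across sentence forms, which is the pattern the abstract reports for the left-lateralised frontal, temporal and parietal network.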
Embodied theories of semantic cognition predict that brain regions involved in motion perception are engaged when people comprehend motion concepts expressed in language. Left lateral occipitotemporal cortex (LOTC) is implicated in both motion perception and motion concept processing, but prior studies have produced mixed findings on which parts of this region are engaged by motion language. We scanned participants performing semantic judgements about sentences describing motion events and static events. We performed univariate analyses, multivariate pattern analyses (MVPA) and psychophysiological interaction (PPI) analyses to investigate the effect of motion on activity and connectivity in different parts of LOTC. In multivariate analyses that decoded whether a sentence described motion or not, the whole of LOTC showed above-chance performance, exceeding that of other brain regions. Univariate ROI analyses found that the middle part of LOTC was more active for motion events than for static ones. Finally, PPI analyses found that when processing motion events, the middle and posterior parts of LOTC, which overlap with motion perception regions, increased their connectivity with cognitive control regions. Taken together, these results indicate that the whole of LOTC responds differently to motion vs. static event descriptions, and that these effects are most pronounced in more posterior sites. These findings are consistent with embodied accounts of semantic processing and suggest that understanding verbal descriptions of motion engages areas of occipitotemporal cortex involved in perceiving motion.
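As a rough illustration of the ROI-based decoding step (classifying whether a sentence describes motion or not), the sketch below runs a cross-validated classifier within hypothetical anterior, middle, and posterior LOTC subregions. The ROI names, trial counts, voxel counts, and simulated patterns are assumptions for illustration, not the authors' data or analysis code.

# Hedged sketch of within-ROI motion vs. static decoding; simulated data only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 48
labels = np.tile([0, 1], n_trials // 2)              # 0 = static sentence, 1 = motion sentence
rois = {"anterior LOTC": 120, "middle LOTC": 150, "posterior LOTC": 180}  # assumed voxel counts

for name, n_voxels in rois.items():
    patterns = rng.normal(size=(n_trials, n_voxels)) # placeholder single-trial patterns
    acc = cross_val_score(LinearSVC(), patterns, labels, cv=8).mean()
    print(f"{name}: motion vs. static accuracy = {acc:.2f} (chance = 0.50)")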