This is a pre-copy-editing, author-produced PDF of an article accepted for publication in Neuropsychologia following peer review. The definitive publisher-authenticated version is Liuzzi A.G., Bruffaerts R., Dupont P., Adamczuk K., Peeters R., De Deyne S., Storms G., Vandenberghe R.

Abstract

Left perirhinal cortex has been previously implicated in associative coding. According to a recent experiment, the similarity of perirhinal fMRI response patterns to written concrete words is higher for words which are more similar in their meaning. If left perirhinal cortex functions as an amodal semantic hub, one would predict that this semantic similarity effect would extend to the spoken modality. We conducted an event-related fMRI experiment and evaluated whether the same semantic similarity effect could be obtained for spoken as for written words. Twenty healthy subjects performed a property verification task in either the written or the spoken modality. Words corresponded to concrete animate entities for which extensive feature generation was available from more than 1000 subjects. From these feature generation data, a concept-feature matrix was derived, which formed the basis of a cosine similarity matrix between the entities reflecting their similarity in meaning (called the "semantic cossimilarity matrix"). Independently, we calculated a cosine similarity matrix between the left perirhinal fMRI activity patterns evoked by the words (called the "fMRI cossimilarity matrix"). Next, the similarity was determined between the semantic cossimilarity matrix and the fMRI cossimilarity matrix. This was done for written and spoken words pooled, for written words only, for spoken words only, as well as for crossmodal pairs. Only for written words did the fMRI cossimilarity matrix correlate with the semantic cossimilarity matrix.
Contrary to our prediction, we did not find any such effect for auditory word input nor did we find cross-modal effects in perirhinal cortex between written and auditory words. Our findings situate the contribution of left perirhinal cortex to word processing at the top of the visual processing pathway, rather than at an amodal stage where visual and auditory word processing pathways have already converged.
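The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the dimensions (number of concepts, features, and voxels) and the simulated data are assumptions chosen only to make the example runnable; in the study, the concept-feature matrix came from feature generation norms and the activity patterns from the left perirhinal region of interest.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions): 24 animate concepts,
# 400 generated features, 150 voxels in the region of interest.
n_concepts, n_features, n_voxels = 24, 400, 150
concept_feature = rng.random((n_concepts, n_features))       # concept-feature matrix
fmri_patterns = rng.standard_normal((n_concepts, n_voxels))  # one pattern per word

def cosine_similarity_matrix(X):
    """Pairwise cosine similarity between the rows of X."""
    return 1.0 - squareform(pdist(X, metric="cosine"))

# The two cossimilarity matrices described in the abstract.
semantic_cossim = cosine_similarity_matrix(concept_feature)
fmri_cossim = cosine_similarity_matrix(fmri_patterns)

# Compare the matrices on their off-diagonal entries only
# (the diagonal is trivially 1 and would inflate the correlation).
iu = np.triu_indices(n_concepts, k=1)
rho, p = spearmanr(semantic_cossim[iu], fmri_cossim[iu])
```

With simulated random data the correlation is near zero; in the study, a reliably positive correlation between the two matrices for written words constituted the semantic similarity effect.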
How semantic representations are manifested across the brain remains a topic of active debate. A semantic representation may be determined by specific semantic features (e.g. sensorimotor information), or may abstract away from specific features and represent generalized semantic characteristics (general semantic representation). Here we tested whether nodes of the semantic system code for a general semantic representation and/or possess representational spaces linked to particular semantic features. In an fMRI study, eighteen participants performed a typicality judgment task with written words drawn from sixteen different categories. Multivariate pattern analysis (MVPA) and representational similarity analysis (RSA) were adopted to investigate the sensitivity of brain regions to semantic content and the type of semantic representation coded (general or feature-based). We replicated previous findings of sensitivity to general semantic similarity in posterior middle/inferior temporal gyrus (pMTG/ITG) and precuneus (PC), and additionally observed general semantic representations in ventromedial prefrontal cortex (PFC). Finally, two brain regions of the semantic network were sensitive to semantic features: the left pMTG/ITG was sensitive to haptic perception and the left ventral temporal cortex (VTC) to size. These findings support the involvement of both general and feature-based semantic representations in the brain's semantic system.

Conceptual and semantic knowledge are fundamental aspects of human cognition, and the investigation of the neural substrates underlying these processes is an ongoing topic of research in the cognitive neurosciences. Although current evidence has demonstrated that semantic knowledge is represented in a distributed manner over the brain 1,2, the manner in which semantic representation is manifested remains a topic of active debate.
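The contrast between a general semantic representation and a feature-based one can be sketched with a minimal RSA comparison. All data here are simulated placeholders under assumed dimensions; the study used neural patterns from specific regions, feature-generation-based models, and per-item ratings for features such as size.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Assumed dimensions: 64 items (e.g. 16 categories x 4 exemplars), 200 voxels.
n_items, n_voxels = 64, 200
neural = rng.standard_normal((n_items, n_voxels))

# Two model RDMs: a "general semantic" model built from full feature
# vectors, and a single-feature model built from size ratings (simulated).
features = rng.random((n_items, 50))
size_ratings = rng.random((n_items, 1))

# Condensed (upper-triangle) representational dissimilarity matrices.
neural_rdm = pdist(neural, metric="correlation")
general_rdm = pdist(features, metric="cosine")
size_rdm = pdist(size_ratings, metric="euclidean")

# Rank-correlate each model RDM with the neural RDM, as is standard in RSA.
rho_general, _ = spearmanr(neural_rdm, general_rdm)
rho_size, _ = spearmanr(neural_rdm, size_rdm)
```

A region whose neural RDM tracks the full-feature model but not the size model would be read as coding a general semantic representation; the reverse pattern would indicate feature-based coding.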
The association of anterior temporal lobe damage with semantic impairment in primary progressive aphasia, herpes encephalitis, and other lesions led to an emphasis on this brain region as a critical locus for semantic processing 3,4. However, functional neuroimaging has suggested that a broader range of regions is involved in semantic processing 5. A meta-analysis of 120 studies 6 identified a "general semantic network": a left-lateralized network comprising seven brain regions activated in a variety of semantic tasks: the angular gyrus, lateral and ventral temporal cortex, ventromedial prefrontal cortex, inferior frontal gyrus, dorsomedial prefrontal cortex, and posterior cingulate gyrus. However, not all brain regions activated in semantic tasks necessarily represent semantic content; for instance, regions may control access to semantic information rather than contain that information themselves 2,7,8. Starting from this assumption, Fairhall and Caramazza (2013) 9 identified a set of regions representing semantic content by means of multivariate pattern analysis (MVPA) and representational similarity analysis (RSA) 10. The authors showed that a left-lateralized netwo...