The neural bases of haptically guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of the shape, orientation, and length of the to-be-grasped targets, was associated with activity in the fronto-parietal, temporo-occipital, and insular cortices. Yet, only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with the left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that parietal processing for action guidance primarily reflects transformations from object-related to effector-related coding, and that these mechanisms are largely independent of sensory input modality.
The functional magnetic resonance imaging (fMRI) adaptation (a.k.a. repetition suppression) paradigm was used to test whether semantic information contained in object-related (transitive) pantomimes and communicative (intransitive) gestures is represented differently in the occipito-temporal cortex. Participants watched 2.75-s back-to-back videos in which the meaning of the gesture was either repeated or changed. The just-observed (typically second) gesture was then imitated. To maintain participants' attention, some trials contained a single video. fMRI adaptation (signal decreases) for watching both movement categories was observed particularly in the lateral occipital cortex, including the extrastriate body area (EBA). Yet, repetition suppression specific to intransitive (vs. transitive) gestures was found mainly in the left rostral EBA and caudal middle temporal gyrus (the rEBA/cMTG complex). Repetition enhancement (signal increases) was revealed in the precuneus. While the whole-brain and region-of-interest analyses indicate that the precuneus is involved only in visuospatial action processing for later imitation, the common EBA repetition suppression discloses sensitivity to the meaning of symbolic gestures, namely the "semantic what" of actions. Moreover, the rEBA/cMTG suppression reveals greater selectivity for conventionalized communicative gestures. Thus, fMRI adaptation shows higher-order functions of the EBA and its role in the semantic network, and indicates that its functional repertoire is wider than previously thought.

Recent behavioral, neuroimaging, and neuropsychological evidence [1-4] indicates that performance of meaningful hand movements typically engages a common left-lateralized praxis representation network (PRN) [2], regardless of whether these are object-related (e.g., tool use/transitive) or non-object-related (intransitive) gestures. Moreover, there is now convincing evidence that the latter category of skilled movements (also referred to as communicative gestures) invokes these same neural resources less than pantomimed tool use [1,2]. These conclusions are, nevertheless, based almost entirely on research involving simulated actions retrieved from stored representations [2,5] or gesture imitation [4,6,7].