2018
DOI: 10.1016/j.neuroimage.2018.05.043
Investigating common coding of observed and executed actions in the monkey brain using cross-modal multi-variate fMRI classification

Abstract: Mirror neurons are generally described as a neural substrate hosting shared representations of actions, by simulating or 'mirroring' the actions of others onto the observer's own motor system. Since single neuron recordings are rarely feasible in humans, it has been argued that cross-modal multi-variate pattern analysis (MVPA) of non-invasive fMRI data is a suitable technique to investigate common coding of observed and executed actions, allowing researchers to infer the presence of mirror neurons in the human…
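The cross-modal MVPA logic the abstract refers to is simple to state: a classifier is trained to discriminate actions from fMRI activity patterns in one modality (execution) and tested on patterns from the other (observation); above-chance transfer is taken as evidence for a shared action code. The sketch below illustrates that logic with scikit-learn on synthetic data; it is a minimal illustration, not the authors' pipeline, and the array names, ROI size, and labels are placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Illustrative stand-in data (random, so accuracy will hover around
# chance): one row per trial, one column per voxel in some ROI.
# In a real analysis these would be GLM beta patterns, and the labels
# would code the action performed/observed (e.g., 0 = grasp, 1 = reach).
n_trials, n_voxels = 40, 200
X_execute = rng.standard_normal((n_trials, n_voxels))
y_execute = rng.integers(0, 2, n_trials)
X_observe = rng.standard_normal((n_trials, n_voxels))
y_observe = rng.integers(0, 2, n_trials)

# Cross-modal decoding: train a linear classifier on executed-action
# patterns, test it on observed-action patterns, then reverse.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

clf.fit(X_execute, y_execute)
acc_exe_to_obs = clf.score(X_observe, y_observe)

clf.fit(X_observe, y_observe)
acc_obs_to_exe = clf.score(X_execute, y_execute)

# Above-chance transfer in both directions is the usual criterion for
# a modality-invariant action code in the ROI; in practice significance
# is assessed against a permutation null distribution.
print(f"execute -> observe accuracy: {acc_exe_to_obs:.2f}")
print(f"observe -> execute accuracy: {acc_obs_to_exe:.2f}")
```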

Cited by 21 publications (47 citation statements)
References 110 publications (221 reference statements)
“…Finally, a recent study on ventral and dorsal premotor mirror neurons demonstrated that in both these nodes of the cortical action observation network the probability of finding strict visuomotor congruence between motor and visual representations of grip type in single neurons was at chance level (Papadourakis and Raos 2018). Although in the execution mode we could not test the same variety of actions as in the observation mode, the present and previous (Papadourakis and Raos 2018) findings suggest that in most areas of the action observation network (Bonini 2017; Bruni et al 2018; Fiave et al 2018) the encoding of executed and observed actions recruits largely overlapping sets of neurons, which nonetheless may specify highly distinct variants of the encoded features when switching between the visual and motor modes of action representation.…”
Section: Discussion (contrasting)
confidence: 81%
“…Significantly, we have been able to directly match our neurophysiological findings of a tuning of pAIP neurons to self and others’ observed actions with the neuro-anatomical evidence, obtained in the same animals, of three rostral-to-caudally increasing connectivity gradients. Compared with the intermediate and rostral levels of AIP, pAIP displays stronger connections with 1) a set of visual areas of the ventral stream that convey information about object features (Sary et al 1993; Logothetis et al 1995; Saleem and Tanaka 1996; Koteles et al 2008; Hong et al 2016) and observed actions (Perrett et al 1989; Nelissen et al 2011), in particular the dynamic body-shape changes defining the action (Vangeneugden et al 2009), 2) prefrontal cortical areas, including visually recipient areas 12r and 46v (Borra et al 2011; Gerbella et al 2013), involved in manual action planning (Bruni et al 2015; Simone et al 2015) and observation (Raos and Savaki 2017; Simone et al 2017; Fiave et al 2018), and 3) oculomotor regions, including area LIP, which may drive spatial attention processes aimed at proactively capturing goals and targets of others’ observed actions (Flanagan and Johansson 2003; Falck-Ytter et al 2006; Elsner et al 2013; Maranesi et al 2013; Lanzilotto et al 2017). The specificity of this anatomofunctional association is underscored by the absence of a gradient in the connections with dorsovisual and skeletomotor-related areas, as well as a reversed caudal-to-rostral incremental gradient for the connections with a large set of mainly parietal somatosensory regions, consistent with previous studies (Lewis and Van Essen 2000; Borra et al 2008; Baumann et al 2009).…”
Section: Discussion (mentioning)
confidence: 99%
“…There is also some evidence for decoding of visual objects based on the pattern of responses in somatosensory regions (e.g., Meyer et al, 2011; Smith and Goodale, 2013), but the univariate responses in these studies are very weak. Consistent with this prior literature (Chan and Baker, 2015; Kilintari et al, 2016; Fiave et al, 2018), we did not observe any responses to the sight of static hands and feet in sensorimotor cortex of our control participants in any session. Thus, the strong responsiveness to visually presented limbs (particularly the amputated limb) appears to be a specific effect following major limb amputation.…”
Section: Discussion (supporting)
confidence: 93%
“…Visual responses are not typically reported in sensorimotor cortex of healthy individuals, especially to static images. There have been reports of responses to the sight of touch in somatosensory areas (Keysers et al, 2004; Schaefer et al, 2009; Schaefer et al, 2013; Kuehn et al, 2013; Kuehn et al, 2014; Kuehn et al, 2018), but in an earlier study we found that the sight of a hand being brushed elicited responses in regions of posterior parietal cortex that did not overlap with the somatosensory representations of the hand (Chan and Baker, 2015), with similar results recently reported in monkeys (Fiave et al, 2018). There is also some evidence for decoding of visual objects based on the pattern of responses in somatosensory regions (e.g., Meyer et al, 2011; Smith and Goodale, 2013), but the univariate responses in these studies are very weak.…”
Section: Discussion (supporting)
confidence: 91%
“…Electrophysiological and neuroimaging studies in macaques have shown that, from a functional motor point of view, neurons in F5p and the dorsal portion of F5c exhibit responses during skilled manual motor acts such as object grasping or manipulation with the hand (di Pellegrino et al, 1992; Fiave et al, 2018; Fluet et al, 2010; Gallese et al, 1996; Gentilucci et al, 1988; Kraskov et al, 2009; Murata et al, 1997; Nelissen et al, 2018; Nelissen and Vanduffel, 2011; Raos et al, 2006; Sharma et al, 2018). A subset of the grasping-related motor neurons in F5p and dorsal F5c (Bonini et al, 2014) also discharges selectively to the visual presentation of graspable 3D objects, and it has been suggested that these so-called canonical neurons play a crucial role in transforming object properties such as size, shape and orientation into appropriate potential motor programs for hand-object interactions (Jeannerod et al, 1995).…”
Section: Introduction (mentioning)
confidence: 99%