2021
DOI: 10.1016/j.neuroimage.2021.118511

Decoding grip type and action goal during the observation of reaching-grasping actions: A multivariate fMRI study

Cited by 13 publications (11 citation statements)
References: 93 publications
“…The MVPA results also indicate a high-level accuracy (∼80%) in decoding between PLDs and fully visible actions in SPL and in PMd. This is not surprising because, as previously described, both the dorsal and ventral parietal and premotor areas are involved in the processing of reaching but also of some features of the observed grasping, such as specific grip configuration (Errante et al., 2021; Errante & Fogassi, 2019). Here, however, the decoding accuracy in SPL and PMd could not be explained only by differences in reaching movement features or grip configuration because both PLDs and fully visible actions were matched for these characteristics.…”
Section: Differential EAON Contribution in the Processing of Observed… (supporting)
Confidence: 83%
“…The activation of IPL and PMv is in line with the results of a large body of studies on action observation of fully visible stimuli (Caspers et al., 2010; Hardwick et al., 2018), thus suggesting the involvement of a common motor resonance mechanism in both PLDs and fully visible grasping stimuli. These regions can be involved in coding action goal and specific aspects of performed acts, for example, grip type and action outcome (Binkofski et al., 1999; Errante et al., 2021; Grafton & Hamilton, 2007). The activation also involves areas within the so‐called “dorsal circuit” such as PMd, SPL, and SPOC (Cavina‐Pratesi et al., 2010; Filimon et al., 2007; Gallivan et al., 2009; Gazzola & Keysers, 2009), usually considered as involved in the observation, as well as in the execution, of reaching motor acts.…”
Section: Discussion (mentioning)
Confidence: 99%
“…This interpretation may be criticised, as grasping and talking may be considered the ‘what’ an agent is doing. It is important to note that while others’ gaze contributes to action prediction abilities [52,53], action observation studies typically focus the observer’s attention on the actor’s upper limb without displaying the actor’s gaze and face [54–58]. Studying the temporal order of action is a relatively new area of inquiry [59], which one day may refine the criteria defining the ‘what’ and the ‘why’ of the observed action.…”
Section: Is Predicting Action From Gaze Equivalent to Intention Reading? (mentioning)
Confidence: 99%