2020
DOI: 10.1073/pnas.2007018117
Stable readout of observed actions from format-dependent activity of monkey’s anterior intraparietal neurons

Abstract: Humans accurately identify observed actions despite large dynamic changes in their retinal images and a variety of visual presentation formats. A large network of brain regions in primates participates in the processing of others’ actions, with the anterior intraparietal area (AIP) playing a major role in routing information about observed manipulative actions (OMAs) to the other nodes of the network. This study investigated whether the AIP also contributes to invariant coding of OMAs across different visual f…

Cited by 29 publications (33 citation statements)
References 47 publications
“…(2017). In the monkey, this rostral part of the lower bank of STS region projects to AIP (Lanzilotto et al, 2019), and the visual signals related to hand–target relationships probably enter into the definition of the visual identity of OMAs (combination of the observed goal of the action and perceived body movements bringing about this result) at that level (Lanzilotto et al, 2020). AIP neurons are tuned to OMAs (Lanzilotto et al, 2019), and if those tuned to flick and push are equal in number, little difference between the two action conditions is expected.…”
Section: Discussion
confidence: 99%
“…Categorical distinctions of observed actions (OAs) have been found in LOTC, in particular the abstract action categories transitivity and sociality (Wurm, Caramazza, & Lingnau, 2017), and more recently, the action components such as body parts, scenes, movements, objects, sociality, and transitivity (Tucciarelli, Wurm, Baccolo, & Lingnau, 2019). On the other hand, recent evidence (Lanzilotto et al, 2019, 2020) indicates that PPC regions process the visual identity of OAs, in a similar way to the processing by the ventral pathway of the visual identity of objects (Hung, Kreiman, Poggio, & DiCarlo, 2005), and in particular of faces (Chang & Tsao, 2017). By visual identity of OAs, we refer to the integration of the goal of the action, that is, the change in the outside world it aims to produce, and the body movements of the conspecific that allow this goal to be reached.…”
Section: Introduction
confidence: 99%
“…For example, individual neurons responsive to the observation of others' manipulative actions (grasping, dragging, etc.) in anterior intraparietal area show viewpoint-dependent coding, but as a population they provide viewpoint-invariant coding of the observed action (Lanzilotto et al, 2020; see also Livi et al, 2019, for a similar demonstration of population encoding of observed actions in presupplementary motor area).…”
Section: Reflection
confidence: 93%
“…Videos were presented in a pseudorandom manner: All conditions were randomly ordered and presented once before repetition. Video stimuli used for L0, L1, and F0 were also used in (44), which tested neural encoding of observed actions in nonhuman primates. Presentation of the videos differed, however, as before video presentation, during baseline, this study used a highly blurred (full width at half maximum = 80 pixels) static frame (average of all video frames).…”
Section: Tasks and Stimuli
confidence: 99%