2011
DOI: 10.1523/jneurosci.2921-10.2011
Heterogeneous Representations in the Superior Parietal Lobule Are Common across Reaches to Visual and Proprioceptive Targets

Abstract: The planning and control of sensory-guided movements requires the integration of multiple sensory streams. Although the information conveyed by different sensory modalities is often overlapping, the shared information is represented differently across modalities during the early stages of cortical processing. We ask how these diverse sensory signals are represented in multimodal sensorimotor areas of cortex in macaque monkeys. While a common modality-independent representation might facilitate downstream reado…


Cited by 91 publications (104 citation statements)
References 61 publications
“…SPL contained representations for both spatial target location and movement direction, which is consistent with single-neuron recordings in nonhuman primates (Lacquaniti et al., 1995; McGuire and Sabes, 2011; Bremner and Andersen, 2012), fMRI results in humans (Connolly et al., 2003; Medendorp et al., 2003), and behavioral findings in optic ataxia patients (Khan et al., 2013). Furthermore, the interaction between spatial target and movement direction in SPL (i.e., the enhanced generalization accuracy when target information was available) suggests that SPL might perform computations involving both features that are crucial for a transformation.…”
Section: Discussion (supporting)
confidence: 59%
“…Although the present study focused on transformations between features, a future challenge is to identify loci for transformations of the same feature between multiple frames of reference (Ogawa and Inui, 2012). Because reference frames are often mixed within neural populations (Kakei et al., 1999; Chang and Snyder, 2010; McGuire and Sabes, 2011), parsing out these representations with fMRI will require a more sophisticated approach, such as finding voxel subsets that best code for a movement property or reference frame of interest. The ability to characterize both between-feature and within-feature interactions will give a more complete understanding of large-scale representations underlying sensorimotor transformations for goal-directed movement.…”
Section: Discussion (mentioning)
confidence: 99%
“…1, the medial PPC includes areas V6A, medial intraparietal (MIP), PE, PEc, and PGm (Bakola et al., 2010, 2013; Goldman-Rakic, 1989a, 1989b; Colby and Duhamel, 1991; Galletti et al., 1999; Pandya and Seltzer, 1982). All these areas form the reaching network of the superior parietal lobule (SPL) (Fattori et al., 2001, 2005; Ferraina et al., 1997; McGuire and Sabes, 2011; Snyder et al., 1997).…”
mentioning
confidence: 99%
“…Based on functional and anatomical evidence, caudal SPL areas such as V6A are thought to rely primarily on visual input, whereas rostral SPL areas such as PE are thought to mainly process proprioceptive input (Bakola et al., 2013; Ferraina et al., 2009; Gamberini et al., 2009; McGuire and Sabes, 2011; Passarelli et al., 2011; Shi et al., 2013). These differences might be important if we consider that natural arm movements are usually performed in three-dimensional (3D) space, and there is behavioral evidence suggesting that movement in depth relies more on proprioceptive than visual input, whereas vision is more crucial for controlling the direction of movement (Monaco et al., 2010; Sainburg et al., 2003; van Beers et al., 1998, 2002, 2004).…”
mentioning
confidence: 99%
“…Data were then collected in 12 sessions for Monkey D and in 14 sessions for Monkey E. The experimental data used in this study were also used in a previous study (Chaisanguanthum et al., 2014), and additional experimental details can be found there. Briefly, reach targets (7 cm from initial hand position) and visual feedback were provided via a virtual reality setup (McGuire and Sabes, 2011). For each trial, animals moved their hand to a fixed “start” position, and a reach target was then presented.…”
Section: Methods (mentioning)
confidence: 99%