2015
DOI: 10.1111/ejn.12935
Automatic representation of a visual stimulus relative to a background in the right precuneus

Abstract: Our brains represent the position of a visual stimulus egocentrically, in either retinal or craniotopic coordinates. In addition, recent behavioral studies have shown that the stimulus position is automatically represented allocentrically relative to a large frame in the background. Here, we investigated neural correlates of the 'background coordinate' using an fMRI adaptation technique. A red dot was presented at different locations on a screen, in combination with a rectangular frame that was also presented …

Cited by 20 publications (20 citation statements)
References 59 publications
“…In a recent fMRI study that used a non-spatial shape judgment task (that most closely resembles our Color control task as opposed to our saccade tasks), Uchimura et al (2015) found adaptation effects for allocentric stimulus location in precuneus and MOG. These modulations disappeared when the allocentric landmark was reduced to a size comparable to the landmark that was used in the current study.…”
Section: Discussion (mentioning)
confidence: 77%
“…These findings revealed previously unknown roles played by these four areas in early visuospatial transformations of reach targets when an allocentric landmark is available for representing targets relative to it. This does not mean that these areas are devoted exclusively to these functions: previous literature suggests that they likely perform multiple functions, including automatic coding of allocentric targets within a large background [74], egocentric transformations for reach control [15, 75, 76], and general reach planning [77-82].

Functional overview of allocentric, egocentric, and allo-to-ego transformations for reach: Figure 5A provides an overview of the cortical regions and their functional connectivity for allocentric and egocentric coding of target direction in memory, conversion of allocentric to egocentric target representations, and egocentric reach directional selectivity for planning and execution.…”
Section: Neural Mechanisms for Allo-to-Ego Conversion of Remembered R (mentioning)
confidence: 99%
“…Taken together, our results suggest that the right pPrecuneus, right Pre-SMA, and bilateral PMd are the most likely candidates for the Allo-Ego conversion in our tasks, although they likely also serve other, more general functions in vision and reach planning. For example, the precuneus has been observed to be active during other types of visuo-motor dissociation tasks (Fernandez-Ruiz et al., 2007; Gertz & Fiehler, 2015; Gorbet & Sergio, 2016), and it has also been implicated in the automatic coding of allocentric targets in large background coordinates (Uchimura et al., 2015) and in processing spatial information for motor imagery (Cavanna & Trimble, 2006).…”
Section: Specific Areas Involved in the Allo-Ego Conversion (mentioning)
confidence: 99%