2016
DOI: 10.1016/j.visres.2016.10.004
Allocentric information is used for memory-guided reaching in depth: A virtual reality study

Abstract: Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most of the studies are limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table…

Cited by 32 publications (44 citation statements). References 43 publications.
“…We replicated our previous findings (Fiehler et al., 2014; Klinghammer et al., 2015; Klinghammer et al., 2017; Klinghammer et al., 2016) showing that reaching trajectories and endpoints are systematically influenced by object shifts in the environment and that this influence increases with the number of shifted objects. The allocentric weights ranged from −0.13 to 0.44, indicating that reaching endpoints were affected by up to 44% by the object shifts.…”
Section: Discussion (supporting)
confidence: 89%
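The allocentric weight quoted above can be read as the fraction of the object displacement that carries over into the reach endpoint. Below is a minimal sketch, assuming the weight is estimated as the regression slope of endpoint shift on object shift, as is common in cue-conflict reaching paradigms; the function name and toy data are illustrative, not the authors' analysis code.

```python
# Illustrative sketch (not the authors' code): estimate an allocentric
# weight as the least-squares slope of reach-endpoint displacement
# regressed on object displacement.
import numpy as np

def allocentric_weight(object_shifts, endpoint_shifts):
    """Slope of endpoint shift on object shift; ~1 means full reliance
    on the shifted objects, ~0 means no influence."""
    slope, _intercept = np.polyfit(object_shifts, endpoint_shifts, deg=1)
    return slope

# Toy data: objects shifted by +/-5 cm; endpoints follow ~44% of the shift.
rng = np.random.default_rng(0)
shifts = rng.choice([-5.0, 5.0], size=40)           # object displacement (cm)
endpoints = 0.44 * shifts + rng.normal(0, 0.5, 40)  # endpoint displacement (cm)

print(f"allocentric weight ~ {allocentric_weight(shifts, endpoints):.2f}")
# -> approximately 0.44: endpoints follow 44% of the object shift
```

On this reading, a weight of 0.44 means a 5 cm object shift drags the remembered reach endpoint about 2.2 cm in the same direction, while a small negative weight indicates a slight shift opposite to the objects.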
“…Previous work from our group demonstrated that targets for memory-guided reaching are coded with respect to other objects in the environment, i.e., in an allocentric reference frame (Fiehler, Wolf, Klinghammer, & Blohm, 2014; Klinghammer, Blohm, & Fiehler, 2015; Klinghammer, Blohm, & Fiehler, 2017; Klinghammer, Schütz, Blohm, & Fiehler, 2016). For example, in the study of Fiehler et al. (2014), participants were presented with a naturalistic breakfast scene that contained six objects on a table (table objects) and three objects in the environment (background objects).…”
Section: Introduction (mentioning)
confidence: 99%
“…This makes sense intuitively, since we have likely learned that objects that move do not make good landmarks. A more recent series of cue-conflict studies looking at memory-guided reach in naturalistic visual scenes with multiple allocentric objects has shown that allocentric weights may also depend on task relevancy of the allocentric cues.…”
Section: Behavioral Aspects of Landmark-Guided Reaching (mentioning)
confidence: 99%
“…A more recent series of cue-conflict studies looking at memory-guided reach in naturalistic visual scenes with multiple allocentric objects has shown that allocentric weights may also depend on task relevancy of the allocentric cues.[46–49]

Timing of the allocentric-to-egocentric transformation for action

In order to use allocentric representations to aim reaches, they must somehow be transformed into egocentric commands for motion, that is, arm relative to shoulder. Does the visuomotor system wait until the last possible moment to perform this conversion, or does it do so at the first opportunity?…”
Section: Behavioral Aspects of Landmark-Guided Reaching (mentioning)
confidence: 99%
“…In order to study allocentric coding of reach targets in more naturalistic scenarios, recent work from our lab applied naturalistic 2D images of complex scenes (Fiehler et al., 2014; Klinghammer et al., 2015) or 3D virtual reality (Klinghammer et al., 2016). In these experiments, we presented naturalistic images of a breakfast scene containing multiple objects on a table and in the background.…”
Section: Introduction (mentioning)
confidence: 99%