Establishing a coherent internal reference frame for visuospatial representation and maintaining the integrity of this frame during eye movements are thought to be crucial for both perception and motor control. A stable headcentric representation could be constructed by internally comparing retinal signals with eye position. Alternatively, visual memory traces could be actively remapped within an oculocentric frame to compensate for each eye movement. We tested these models by measuring errors in manual pointing (in complete darkness) toward briefly flashed central targets during three oculomotor paradigms. Subjects pointed accurately when gaze was maintained on the target location (control paradigm). However, when steadily fixating peripheral locations (static paradigm), subjects exaggerated the retinal eccentricity of the central target by 13.4 ± 5.1%. In the key "dynamic" paradigm, subjects briefly foveated the central target and then saccaded peripherally before pointing toward the remembered location of the target. Our headcentric model predicted accurate pointing (as seen in the control paradigm) independent of the saccade, whereas our oculocentric model predicted misestimation (as seen in the static paradigm) of an internally shifted retinotopic trace. In fact, pointing errors were significantly larger than control errors (p = 0.003) and were indistinguishable (p ≥ 0.25) from the static paradigm errors. Scatter plots of pointing errors (dynamic vs. static paradigm) for various final fixation directions showed an overall slope of 0.97, contradicting the headcentric prediction (0.0) and supporting the oculocentric prediction (1.0). Varying both fixation and pointing-target direction confirmed that these errors were a function of retinotopically shifted memory traces rather than eye position per se.
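The two competing predictions can be made concrete in a toy sketch (hypothetical code, not from the study; the 13.4% eccentricity-overestimation gain is the static-paradigm result quoted above):

```python
# Assumed parameter: retinal eccentricity is exaggerated by ~13.4%.
GAIN = 0.134

def static_error(retinal_ecc_deg):
    """Pointing error when fixating peripherally: the target's retinal
    eccentricity is overestimated by a fixed gain."""
    return GAIN * retinal_ecc_deg

def dynamic_error_headcentric(saccade_deg):
    """Headcentric model: the target was foveated, so the stored
    head-centered location is accurate; the later saccade is irrelevant."""
    return 0.0

def dynamic_error_oculocentric(saccade_deg):
    """Oculocentric model: the memory trace is remapped by the saccade,
    acquiring a retinal eccentricity equal to the saccade amplitude, and
    is then misestimated exactly as in the static paradigm."""
    return static_error(saccade_deg)

for saccade in (10.0, 20.0, 30.0):
    print(saccade,
          dynamic_error_headcentric(saccade),
          dynamic_error_oculocentric(saccade))
```

Plotting the dynamic errors against the static errors for matched eccentricities would give a slope of 0.0 under the headcentric model and 1.0 under the oculocentric model, which is the comparison behind the reported slope of 0.97.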
To reconcile these results with previous pointing experiments, we propose a "conversion-on-demand" model of visuomotor control in which multiple visual targets are stored and rotated (noncommutatively) within the oculocentric frame, whereas only select targets are transformed further into head- or bodycentric frames for motor execution.
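The noncommutativity invoked by the model is the ordinary noncommutativity of 3-D rotations: remapping a stored oculocentric target across two rotations of the eye depends on the order in which they occur. A minimal, hypothetical illustration in plain Python:

```python
import math

def rot_x(a):
    """3-D rotation matrix about the x axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    """3-D rotation matrix about the z axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

a = math.radians(30)
target = [1.0, 0.0, 0.0]  # a stored visual direction

# Same two rotations, opposite orders: the remapped directions differ.
xz = apply(matmul(rot_x(a), rot_z(a)), target)
zx = apply(matmul(rot_z(a), rot_x(a)), target)
print(xz)
print(zx)
```

Because the two orders yield different final directions, an updating scheme that stores targets oculocentrically must compose rotations in the order the eye movements actually occur rather than simply summing displacements.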
Cressman EK, Henriques DYP. Sensory recalibration of hand position following visuomotor adaptation.
The location of a remembered reach target can be encoded in egocentric and/or allocentric reference frames. Cortical mechanisms for egocentric reach are relatively well described, but the corresponding allocentric representations are essentially unknown. Here, we used an event-related fMRI design to distinguish human brain areas involved in these two types of representation. Our paradigm consisted of three tasks with identical stimulus display but different instructions: egocentric reach (remember absolute target location), allocentric reach (remember target location relative to a visual landmark), and a nonspatial control, color report (report color of target). During the delay phase (when only target location was specified), the egocentric and allocentric tasks elicited widely overlapping regions of cortical activity (relative to the control), but with higher activation in parietofrontal cortex for the egocentric task and higher activation in early visual cortex for the allocentric task. In addition, egocentric directional selectivity (target relative to gaze) was observed in the superior occipital gyrus and the inferior occipital gyrus, whereas allocentric directional selectivity (target relative to a visual landmark) was observed in the inferior temporal gyrus and inferior occipital gyrus. During the response phase (after movement direction had been specified either by reappearance of the visual landmark or a pro-/anti-reach instruction), the parietofrontal network resumed egocentric directional selectivity, showing higher activation for contralateral than ipsilateral reaches. These results show that allocentric and egocentric reach mechanisms use partially overlapping but different cortical substrates and that directional specification differs between target memory and reach response.
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
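The two-stage scheme in conclusions (a) and (b) can be sketched in a hypothetical, deliberately simplified example (horizontal meridian only, angles in degrees, so rotation reduces to subtraction; in full 3-D these operations are rotational and order-dependent):

```python
# Stage 1: keep the goal in gaze-centered coordinates, updating it
# across each eye movement. Stage 2: only at execution, combine it with
# current eye-in-head orientation to recover an effector-relevant goal.

def update_gaze_centered(target_re_gaze, saccade):
    """Remap the stored gaze-centered goal when the eye moves."""
    return target_re_gaze - saccade

def to_head_centered(target_re_gaze, eye_in_head):
    """Effector transform applied on demand: add current eye orientation."""
    return target_re_gaze + eye_in_head

# Target flashed 10 deg right of fixation while looking straight ahead:
t = 10.0
t = update_gaze_centered(t, 25.0)  # 25-deg rightward saccade
goal = to_head_centered(t, 25.0)   # head-centered goal at execution
print(goal)
```

The updated trace ends up 15 deg left of the new gaze direction, yet the late transform recovers the original 10-deg head-centered location, which is the point of conclusion (b): gaze-centered storage is sufficient provided eye (and head) orientation signals are applied at the output stage.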
Motor adaptation in response to a visuomotor distortion arises when the usual motor command no longer results in the predicted sensory output. In this study, we examined whether exposure to a sensory discrepancy was sufficient on its own to produce changes in reaches and recalibrate the sense of felt hand position in the absence of any voluntary movements. Subjects pushed their hand out along a robot-generated fixed linear path (active exposure group) or were passively moved along the same path (passive exposure group). This fixed path was gradually rotated counterclockwise around the home position with respect to the path of the cursor. On all trials, subjects saw the cursor head directly to the remembered target position while their hand moved outwards. We found that after exposure to the visually distorted hand motion, subjects in both groups adapted their reaches such that they aimed ∼6° to the left of the intended target. The magnitude of reach adaptation was similar to the extent to which subjects recalibrated their sense of felt hand position. Specifically, the position at which subjects perceived their unseen hand to be aligned with a reference marker was the same as that to which they reached when allowed to move freely. Given the similarity in magnitude of these adaptive responses, we propose that reach adaptation arose from changes in subjects' sense of felt hand position. Moreover, these results indicate that motor adaptation can arise following exposure to a sensory mismatch in the absence of movement-related error signals.
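The reported aftereffect amounts to a small planar rotation of the reach vector. A hypothetical sketch (not the study's analysis code; the 6° value is the approximate magnitude reported above):

```python
import math

ADAPT_DEG = 6.0  # assumed adaptation magnitude, counterclockwise (leftward)

def adapted_reach(target_xy, adapt_deg=ADAPT_DEG):
    """Rotate a planar reach endpoint counterclockwise by adapt_deg."""
    a = math.radians(adapt_deg)
    x, y = target_xy
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Outward reach to a target straight ahead at (0, 10) cm:
x, y = adapted_reach((0.0, 10.0))
print(round(x, 2), round(y, 2))
```

The negative x endpoint corresponds to aiming left of the target; on the proposed account, the same rotation describes where subjects feel their unseen hand to be.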