Recent investigations indicate that retinal motion is not directly available for perception when moving around [Souman JL, et al. (2010) J Vis 10:14], possibly pointing to suppression of retinal speed sensitivity in motion areas. Here, we investigated the distribution of retinocentric and head-centric representations of self-rotation in human lower-tier visual motion areas. Functional MRI responses were measured to a set of visual self-motion stimuli with different levels of simulated gaze and simulated head rotation. A parametric generalized linear model analysis of the blood oxygen level-dependent responses revealed subregions of the accessory V3 area, V6+ area, middle temporal area, and medial superior temporal area that were specifically modulated by the speed of the rotational flow relative to the eye and head. Pursuit signals, which link the two reference frames, were also identified in these areas. To our knowledge, these results are the first demonstration of multiple visual representations of self-motion in these areas. The existence of such adjacent representations points to early transformations of the reference frame for visual self-motion signals and a topography by visual reference frame in lower-order motion-sensitive areas. This suggests that visual decisions for action and perception may take into account retinal and head-centric motion signals according to task requirements.

When a child eyeballs the seeker while exposing as little as possible of its head from behind cover, it performs an exquisite piece of eye and head control. Such visually coordinated actions rely on fast and dynamic integration of self-motion signals from different reference frames (1) to control the rotations of the head, the eye, and other body parts. Motion relative to the eye is directly given on the retina. In the monkey lower-tier visual areas, neurons respond vigorously to such retinal motion (2, 3). Motion relative to the head or other body parts must be derived from visual (4) and nonvisual (5) information on the eye's rotation (6). Many monkey cortical areas also contain neurons that take into account the eye's rotation when responding to visual motion (7-10). Remarkably, a number of recent psychophysical studies have concluded that retinal motion signals are not directly available for perception (11-13), in particular during self-motion (11), whereas head-centric motion signals are.

This raises the interesting question of how retinal and head-centric representations of visual motion are distributed across the lower-tier visual motion areas. If subjects cannot attend to retinal motion signals, this could indicate that retinal motion sensitivity gives way to head-centric motion signals in higher-tier areas, such as the posterior parietal cortex (PPC). Alternatively, signals in both reference frames may be present up to higher visual areas, but not available for certain visual tasks or under certain movement conditions. We investigated whether the visual system builds up visual representations of eye-in-space and head-in-space motion. T...
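The kinematic relation that links the two reference frames is simple: the rotation of the scene relative to the head equals the retinal rotation plus the eye-in-head (pursuit) rotation. The sketch below makes this concrete (a minimal sketch in Python; the function name, the deg/s units, and the aligned-axes simplification are illustrative assumptions, not the paper's analysis code):

```python
import numpy as np

def head_centric_rotation(omega_scene_re_eye, omega_eye_in_head):
    """Angular velocity of the scene relative to the head, obtained by
    adding the eye-in-head (pursuit) rotation to the retinal rotation:
    omega_scene/head = omega_scene/eye + omega_eye/head.
    Both inputs are 3-vectors in deg/s, expressed in aligned axes."""
    return (np.asarray(omega_scene_re_eye, dtype=float)
            + np.asarray(omega_eye_in_head, dtype=float))

# Pursuit at +5 deg/s (yaw) over a stationary scene produces -5 deg/s of
# retinal rotation; the sum correctly signals zero head-centric rotation.
print(head_centric_rotation([0, 0, -5], [0, 0, 5]))  # -> [0. 0. 0.]
```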
We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion) with three categories of motion defined by the degree of noise (self-motion, distorted self-motion, and multiple object-motion) and two view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: when self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information, which also rescued the subjects' self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared with self-motion in areas MT+/b and V6, but only for stereo viewing in the latter. The BOLD response did not increase for either view mode in CSv. These different response patterns indicate distinct contributions of areas V6, MT+/b, and CSv to the processing of self-motion and of multiple independent motions.
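The 12 conditions of the factorial design can be enumerated directly. The sketch below (Python) uses level labels taken from the abstract, but the exact condition codes are assumptions for illustration:

```python
from itertools import product

# Factor levels as named in the abstract; the short labels are assumed.
self_motion = ["translation", "translation+rotation"]          # 2 levels
motion_category = ["self-motion", "distorted self-motion",
                   "multiple object-motion"]                   # 3 levels
view_mode = ["stereo", "synoptic"]                             # 2 levels

# Full crossing of the 2 x 3 x 2 blocked design.
conditions = list(product(self_motion, motion_category, view_mode))
assert len(conditions) == 12
for i, cond in enumerate(conditions, 1):
    print(i, cond)
```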
Recent fMRI studies have shown fusion of visual motion and disparity signals for shape perception (Ban et al., 2012) and for unmasking camouflaged surfaces (Rokers et al., 2009), but no such interaction is known for typical dorsal motion-pathway tasks, like grasping and navigation. Here, we investigate human speed perception of forward motion and its representation in the human motion network. We observe strong interactions in medial (V3ab, V6) and lateral (MT+) motion areas, which differ significantly from each other. Whereas retinal disparity dominates the binocular contribution to the BOLD activity in the anterior part of area MT+, head-centric disparity modulation of the BOLD response dominates in areas V3ab and V6. This suggests that the medial motion areas represent not only the rotational speed of the head (Arnoldussen et al., 2011) but also the translational speed of the head relative to the scene. Interestingly, a strong response to vergence eye movements was found in area V1, which showed a dependency on visual direction, just like vertical-size disparity. This is the first report of a vertical-size disparity correlate in human striate cortex.
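The distinction between retinal and head-centric disparity can be made concrete with a simplified geometry sketch: the head-centric angle subtended by a target equals the vergence angle at fixation plus the target's retinal disparity (crossed disparity taken positive). The Python sketch below is a small illustration under this sign convention; the interocular distance and the function names are assumptions, not values or code from the study:

```python
import numpy as np

IOD_CM = 6.5  # assumed interocular distance in cm

def parallax(d_cm):
    """Binocular parallax (rad) of a point straight ahead at distance d."""
    return 2.0 * np.arctan(IOD_CM / (2.0 * d_cm))

def headcentric_parallax(retinal_disparity_rad, fixation_cm):
    """Head-centric target angle = vergence at fixation + retinal
    disparity (crossed positive). Inverting parallax() then recovers
    target distance relative to the head."""
    return parallax(fixation_cm) + retinal_disparity_rad

# A target with +0.5 deg crossed disparity while fixating at 100 cm:
mu = headcentric_parallax(np.deg2rad(0.5), 100.0)
print(IOD_CM / (2.0 * np.tan(mu / 2.0)))  # target distance, approx. 88 cm
```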
Registration of ego-motion is important for accurate navigation through space. Movements of the head and eye relative to space are registered through the vestibular system and optic flow, respectively. Here, we address three questions concerning the visual registration of self-rotation. (1) Eye-in-head movements provide a link between the motion signals received by sensors in the moving eye and sensors in the moving head. How are these signals combined into an ego-rotation percept? We combined optic flow simulating forward and rotational motion of the eye with different levels of eye-in-head rotation for a stationary head, dissociating simulated gaze rotation from simulated head rotation by the level of eye-in-head pursuit. We found that perceived rotation matches the simulated head rotation, not the gaze rotation. This rejects a model of perceived self-rotation that relies on the rotation of the gaze line. Rather, eye-in-head signals serve to transform the optic flow's rotation information, which specifies rotation of the scene relative to the eye, into a rotation relative to the head. This suggests that transformed visual self-rotation signals may combine with vestibular signals. (2) Do transformed visual self-rotation signals reflect the arrangement of the semicircular canals (SCC)? Previously, we found subregions within MST and V6+ that respond to the speed of simulated head rotation. Here, we reanalyzed those blood oxygen level-dependent (BOLD) signals for the presence of a spatial dissociation related to the axes of visually simulated head rotation, such as has been found in subcortical regions of various animals. In contrast, we found a rather uniform BOLD response to simulated rotation about the three SCC axes. (3) We investigated whether subjects' sensitivity to the direction of the head-rotation axis shows SCC-axis specificity. We found that sensitivity to head rotation is rather uniformly distributed, suggesting that in human cortex, visuo-vestibular integration is not arranged in the SCC frame.
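The notion of SCC-axis specificity can be illustrated by projecting a head rotation onto the canal axes. The sketch below (Python) assumes idealized, mutually orthogonal, head-fixed canal axes, a textbook simplification rather than measured canal geometry:

```python
import numpy as np

# Idealized unit vectors for the rotation axes sensed by the three
# semicircular canal pairs (head coordinates: x forward, y left, z up).
# Real canal axes are only roughly orthogonal; these are assumptions.
SCC_AXES = np.array([
    [0.0, 0.0, 1.0],                            # horizontal canals: yaw axis
    [np.cos(np.pi / 4),  np.sin(np.pi / 4), 0.0],  # vertical canal pair 1
    [np.cos(np.pi / 4), -np.sin(np.pi / 4), 0.0],  # vertical canal pair 2
])

def to_scc_frame(omega_head):
    """Project a head rotation velocity (deg/s, head coordinates) onto
    the SCC axes; with orthonormal axes this is a change of basis."""
    return SCC_AXES @ np.asarray(omega_head, dtype=float)

# A pure yaw rotation loads only the horizontal-canal component.
print(to_scc_frame([0.0, 0.0, 10.0]))  # -> [10.  0.  0.]
```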