Binocular disparities have a straightforward geometric relation to object depth, but the computation that humans use to turn disparity signals into depth percepts is neither straightforward nor well understood. One seemingly solid result, which came out of Wheatstone’s work in the 1830s, is that the sign and magnitude of horizontal disparity predict the perceived depth of an object: ‘positive’ horizontal disparities yield the perception of ‘far’ depth, ‘negative’ horizontal disparities yield the perception of ‘near’ depth, and the perceived extent of depth varies monotonically with the magnitude of horizontal disparity. Here we show that this classic link between horizontal disparity and the perception of ‘near’ versus ‘far’ breaks down when the stimuli are one-dimensional. For these stimuli, horizontal is not a privileged disparity direction. Instead of relying on horizontal disparities to determine their depth relative to that of two-dimensional stimuli, the visual system uses a disparity calculation that is non-veridical yet well suited to deal with the joint coding of disparity and orientation.
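For reference, this sign convention follows from the standard small-angle geometry of binocular viewing (a textbook approximation, added here for clarity; it is not part of the original report). With interocular separation I, fixation distance z_f, and object distance z, the horizontal disparity is approximately

\[ \delta \;\approx\; I\left(\frac{1}{z_f} - \frac{1}{z}\right), \]

so \delta > 0 (uncrossed) whenever z > z_f (‘far’ depth), \delta < 0 (crossed) whenever z < z_f (‘near’ depth), and |\delta| grows monotonically with the depth interval between the object and fixation.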
Recent psychophysical experiments suggest that humans can recover only relief structure from motion (SFM); i.e., an object's 3D shape can only be determined up to a stretching transformation along the line of sight. Here we propose a physiologically plausible model for the computation of relief SFM, which is also applicable to the related problem of motion parallax. We assume that the perception of depth from motion is related to the firing of a subset of MT neurons tuned to both velocity and disparity. The model MT neurons are connected to each other laterally to form modulatory interactions. The overall connectivity is such that when a zero-disparity velocity pattern is fed into the system, the most responsive neurons are not those tuned to zero disparity, but instead are those having preferred disparities consistent with the relief structure of the velocity pattern. The model computes the correct relief structure under a wide range of parameters and can also reproduce the SFM illusions involving coaxial cylinders. It is consistent with the psychophysical observation that subjects with stereo impairment are also deficient in perceiving motion parallax, and with the physiological data that the responses of direction- and disparity-tuned MT cells covary with the perceived surface order of bistable SFM stimuli.
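The following Python sketch illustrates the kind of architecture this abstract describes: a population jointly tuned to velocity and disparity, with a modulatory lateral gain that shifts the population peak to a nonzero preferred disparity when the input is a zero-disparity velocity pattern. Every numerical choice (the grid, the Gaussian tuning widths, and the linear velocity-to-disparity mapping d = k·v) is an assumption made for illustration; this is not the published model.

import numpy as np

# Grid of preferred velocities (deg/s) and preferred disparities (deg).
pref_vel = np.linspace(-4.0, 4.0, 41)
pref_disp = np.linspace(-0.4, 0.4, 41)
V, D = np.meshgrid(pref_vel, pref_disp)

def feedforward(stim_vel, stim_disp, sigma_v=1.0, sigma_d=0.2):
    """Gaussian joint tuning to the stimulus velocity and disparity."""
    return (np.exp(-(V - stim_vel) ** 2 / (2 * sigma_v ** 2))
            * np.exp(-(D - stim_disp) ** 2 / (2 * sigma_d ** 2)))

# A velocity pattern presented at zero disparity.
r = feedforward(stim_vel=2.0, stim_disp=0.0)

# Modulatory lateral gain (assumed form): units whose preferred disparity
# matches a hypothetical relief interpretation of their preferred velocity
# (d = k * v) are boosted by their neighbors.
k = 0.1  # assumed depth-from-velocity gain (disparity per unit velocity)
gain = 1.0 + np.exp(-(D - k * V) ** 2 / (2 * 0.05 ** 2))
r_mod = r * gain

# The most responsive unit now has a nonzero preferred disparity even
# though the stimulus disparity is zero, as the abstract describes.
i, j = np.unravel_index(np.argmax(r_mod), r_mod.shape)
print("peak preferred velocity :", V[i, j])
print("peak preferred disparity:", D[i, j])  # nonzero despite zero-disparity input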
Humans can recover the structure of a 3D object from motion cues alone. Recovery of structure from motion (SFM) from the projected 2D motion field of a rotating object has been studied almost exclusively in one particular condition: that in which the axis of rotation lies in the frontoparallel plane. Here, we assess the ability of humans to recover SFM in the general case, where the axis of rotation may be slanted out of the frontoparallel plane. Using elliptical cylinders whose cross section was constant along the axis of rotation, we find that, across a range of parameters, subjects accurately matched the simulated shape of the cylinder regardless of how much the axis of rotation was inclined away from the frontoparallel plane. Yet we also find that subjects do not perceive the inclination of the axis of rotation veridically. This combination of results violates a relationship between perceived angle of inclination and perceived shape that must hold if SFM is to be recovered from the instantaneous velocity field. The contradiction can be resolved if the angular speed of rotation is not consistently estimated from the instantaneous velocity field. This, in turn, predicts that variation in object size along the axis of rotation can cause depth-order violations along the line of sight. This prediction was verified using rotating circular cones as stimuli. Thus, as the axis of rotation changes its inclination, shape constancy is maintained through a trade-off: humans perceive the structure of the object as unchanging relative to a changing axis of rotation by introducing an inconsistency between the perceived speed of rotation and the first-order optic flow. The observed depth-order violations are the cost of this trade-off.
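The inconsistency argument can be made concrete with a textbook first-order relation (a simplification of the slanted-axis geometry studied here, not the paper's full derivation). Under orthographic projection, a point at depth z relative to a frontoparallel rotation axis moves in the image with speed v \approx \omega z, so the depth recovered from the instantaneous velocity field is

\[ \hat{z} \;=\; \frac{v}{\hat{\omega}} \;=\; \frac{\omega}{\hat{\omega}}\, z . \]

If the estimated angular speed \hat{\omega} differs from the true \omega, every recovered depth is scaled by the common factor \omega/\hat{\omega}: perceived shape can remain constant up to a relief stretch along the line of sight even while the perceived rotation speed is inconsistent with the first-order optic flow.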
An object moving in depth produces retinal images that change in position over time by different amounts in the two eyes. Stereoscopic perception of motion in depth can therefore be based on either or both of two visual signals: inter-ocular velocity differences and binocular disparity change over time. Disparity change over time can produce the perception of motion in depth. However, demonstrating the same for inter-ocular velocity differences has proved elusive because of the difficulty of isolating this cue from disparity change (the converse is easily done). No physiological data are available, and existing psychophysical data are inconclusive as to whether inter-ocular velocity differences are used in primate vision. Here, we use motion adaptation to assess the contribution of inter-ocular velocity differences to the perception of motion in depth. If inter-ocular velocity differences contribute to motion in depth, discrimination of the direction of motion in depth should improve after adaptation to frontoparallel motion. This is because an inter-ocular velocity difference is a comparison between two monocular frontoparallel motion signals, and because frontoparallel speed discrimination improves after motion adaptation. We show that adapting to frontoparallel motion does improve both frontoparallel speed discrimination and motion-in-depth direction discrimination. No improvement would be expected if only disparity change over time contributed to motion in depth. Furthermore, we found that frontoparallel motion adaptation diminishes discrimination of both speed and direction of motion in depth in dynamic random-dot stereograms, in which changing disparity is the only cue available. These results provide strong evidence that inter-ocular velocity differences contribute to the perception of motion in depth and thus that the human visual system contains mechanisms for detecting differences in velocity between the two eyes' retinal images.
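The two candidate signals are conventionally written as two orderings of the same stimulus quantity (standard notation, added here for clarity). With monocular image positions x_L(t) and x_R(t),

\[ \frac{d}{dt}\bigl[x_L(t) - x_R(t)\bigr] \;=\; \frac{dx_L}{dt} - \frac{dx_R}{dt}, \]

where the left-hand side is binocular disparity change over time (take the binocular difference first, then differentiate) and the right-hand side is the inter-ocular velocity difference (differentiate each eye's signal first, then take the difference). The two sides are identical in the stimulus and differ only in whether binocular combination precedes or follows monocular motion processing, which is why the cues must be dissociated experimentally, for example with dynamic random-dot stereograms that destroy coherent monocular motion.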
There are two possible binocular mechanisms for the detection of motion in depth: one based on disparity changes over time and the other based on interocular velocity differences. It has previously been shown that disparity changes over time can produce the perception of motion in depth. However, existing psychophysical and physiological data are inconclusive as to whether interocular velocity differences play a role in the perception of motion in depth. We studied this issue using the motion aftereffect, the illusory motion of static patterns that follows adaptation to real motion. We induced a differential motion aftereffect in the two eyes and then tested for motion in depth in a stationary random-dot pattern seen with both eyes. It has been shown previously that a differential translational motion aftereffect produces a strong perception of motion in depth. We show here that a rotational motion aftereffect inhibits this perception of motion in depth, even though a real rotation induces motion in depth. A non-horizontal translational motion aftereffect did not inhibit motion in depth. Together, our results strongly suggest that (1) pure interocular velocity differences can produce motion in depth, and (2) the illusory changes in position from the motion aftereffect are generated relatively late in the visual hierarchy, after binocular combination.