Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
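The abstract does not describe how choice-related activity was quantified. A common metric in such studies is choice probability: the ROC area separating a neuron's spike-count distributions on trials grouped by the animal's choice. The sketch below, using simulated spike counts, is one possible implementation of that metric, not necessarily the authors' method.

```python
# Hypothetical sketch: quantifying choice-related activity with choice
# probability (CP), the ROC area between a neuron's spike-count
# distributions for trials ending in different choices. CP = 0.5 means
# no choice-related activity; values above 0.5 mean higher firing
# precedes the neuron's "preferred" choice.
import numpy as np
from scipy.stats import mannwhitneyu

def choice_probability(counts_pref, counts_null):
    """ROC area between spike counts on preferred- vs. other-choice trials,
    computed via the Mann-Whitney U statistic."""
    u, _ = mannwhitneyu(counts_pref, counts_null, alternative="two-sided")
    return u / (len(counts_pref) * len(counts_null))

rng = np.random.default_rng(0)
pref = rng.poisson(22, size=60)  # simulated counts, preferred-choice trials
null = rng.poisson(18, size=60)  # simulated counts, other-choice trials
print(f"CP = {choice_probability(pref, null):.2f}")
```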
Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, our 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear if this mechanism is conserved in non-human primates, and how the underlying neural architecture constrains 3D perception. Here we assess 3D perception in macaque monkeys using a surface orientation discrimination task. We find that perception is generally accurate, but precision depends on the spatial pose of the surface and available cues. The results indicate that robust perception is achieved by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. They further suggest that 3D perception is influenced by a prior for the 3D orientation statistics of natural scenes. We compare the data to simulations based on the responses of 3D orientation selective neurons. The results are explained by a model in which two independent neuronal populations representing stereoscopic and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) are optimally integrated through linear summation. Perception of combined-cue stimuli is optimal given this architecture. However, an alternative architecture in which stereoscopic cues and perspective cues detected by each eye are represented by three independent populations yields two times greater precision than observed. This implies that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.

Author summary

Our eyes only sense two-dimensional projections of the world (like a movie on a screen), yet we perceive the world in three dimensions. To create reliable 3D percepts, the human visual system integrates distinct visual signals according to their reliabilities, which depend on conditions such as how far away an object is located and how it is oriented. Here we find that non-human primates similarly integrate different 3D visual signals, and that their perception is influenced by the 3D orientation statistics of natural scenes. Cue integration is thus a conserved mechanism for creating robust 3D percepts by the primate brain. Using simulations of neural population activity, based on neuronal recordings from the same animals, we show that some computations which occur widely in the brain facilitate 3D perception, while others hinder perception. This work addresses key questions about how neural systems solve the difficult problem of generating 3D percepts, identifies a plausible neural architecture for implementing robust 3D vision, and reveals how neural computation can simultaneously optimize and curb perception.
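As a hedged illustration of the optimal integration rule invoked in the abstract above, the sketch below combines independent cue estimates by summing inverse variances and contrasts the two-population architecture with the three-population alternative. The threshold values, and the assumption that each eye's perspective signal is as reliable as the pooled signal, are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue
# integration: for independent cues, the combined inverse variance is
# the sum of the single-cue inverse variances.
import numpy as np

def combined_sigma(sigmas):
    """Optimal combined threshold for independent cues with the given
    single-cue thresholds (smaller sigma = greater precision)."""
    return np.sum([1.0 / s**2 for s in sigmas]) ** -0.5

sigma_stereo = 6.0  # deg, hypothetical stereoscopic-cue threshold
sigma_persp = 8.0   # deg, hypothetical perspective-cue threshold

# Two-population architecture: stereo + one pooled perspective signal.
two_pop = combined_sigma([sigma_stereo, sigma_persp])

# Three-population alternative: stereo + independent left- and right-eye
# perspective signals (each assumed as reliable as the pooled signal).
# This predicts greater precision than the paper reports observing.
three_pop = combined_sigma([sigma_stereo, sigma_persp, sigma_persp])

print(f"two populations:   sigma = {two_pop:.2f} deg")
print(f"three populations: sigma = {three_pop:.2f} deg")
```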
Intercepting and avoiding moving objects requires accurate motion-in-depth (MID) perception. Such motion can be estimated based on both binocular and monocular cues. Because previous studies largely characterized sensitivity to these cues individually, their relative contributions to MID perception remain unclear. Here we measured sensitivity to binocular, monocular, and combined-cue MID stimuli using a motion coherence paradigm. We first confirmed prior reports of substantial variability in binocular MID cue sensitivity across the visual field. Because the stimuli were matched for eccentricity and speed, this variability cannot be attributed to stimulus differences and likely has a neural basis. Second, we determined that monocular MID cue sensitivity also varied considerably across the visual field. A major component of this variability was geometric: an MID stimulus produces the largest motion signals in the eye contralateral to its visual field location. This resulted in better monocular discrimination performance when the contralateral rather than the ipsilateral eye was stimulated. Third, we found that monocular cue sensitivity generally exceeded, and was independent of, binocular cue sensitivity. Finally, contralateral monocular cue sensitivity was a strong predictor of combined-cue sensitivity. These results reveal distinct factors constraining the contributions of binocular and monocular cues to three-dimensional motion perception.
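The abstract does not detail how sensitivity was extracted from the coherence data. One conventional approach, sketched below under that assumption, fits a psychometric function to percent correct as a function of motion coherence and defines sensitivity as the inverse of the coherence threshold; all data values are simulated.

```python
# Hedged sketch of sensitivity estimation in a motion coherence paradigm:
# fit a two-alternative Weibull psychometric function to percent correct
# vs. coherence, then report sensitivity as 1 / threshold.
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(c, alpha, beta):
    """2AFC Weibull: rises from 0.5 (chance) toward 1 with coherence.
    alpha is the coherence threshold, beta the slope."""
    return 0.5 + 0.5 * (1 - np.exp(-(c / alpha) ** beta))

coherence = np.array([0.05, 0.1, 0.2, 0.4, 0.8])      # proportion coherent dots
p_correct = np.array([0.52, 0.60, 0.78, 0.93, 0.99])  # simulated observer

(alpha, beta), _ = curve_fit(weibull_2afc, coherence, p_correct, p0=[0.2, 2.0])
print(f"threshold = {alpha:.3f}, sensitivity = {1 / alpha:.1f}")
```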
The visual system exploits multiple signals, including monocular and binocular cues, to determine the motion of objects through depth. In the laboratory, sensitivity to different three-dimensional (3D) motion cues varies across observers and is often weak for binocular cues. However, laboratory assessments may reflect factors beyond inherent perceptual sensitivity. For example, the appearance of weak binocular sensitivity may relate to extensive prior experience with two-dimensional (2D) displays in which binocular cues are not informative. Here we evaluated the impact of experience on motion-in-depth (MID) sensitivity in a virtual reality (VR) environment. We tested a large cohort of observers who reported having no prior VR experience and found that binocular cue sensitivity was substantially weaker than monocular cue sensitivity. As expected, sensitivity was greater when monocular and binocular cues were presented together than in isolation. Surprisingly, the addition of motion parallax signals appeared to cause observers to rely almost exclusively on monocular cues. As observers gained experience in the VR task, sensitivity to monocular and binocular cues increased. Notably, most observers were unable to distinguish the direction of MID based on binocular cues above chance level when tested early in the experiment, whereas most showed statistically significant sensitivity to binocular cues when tested late in the experiment. This result suggests that observers may discount binocular cues when they are first encountered in a VR environment. Laboratory assessments may thus underestimate the sensitivity of inexperienced observers to MID, especially for binocular cues.
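The above-chance comparison in this abstract can be illustrated with a simple binomial test of direction judgments against the 50% guessing rate. A minimal sketch follows, with made-up trial counts; the abstract does not state which statistical test the authors actually used.

```python
# Illustrative above-chance test: did an observer's toward/away judgments
# on binocular-cue trials exceed the 50% rate expected from guessing?
from scipy.stats import binomtest

n_trials, n_correct = 120, 74  # hypothetical trial counts
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"proportion correct = {n_correct / n_trials:.2f}, "
      f"p = {result.pvalue:.4f}")
```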
Robust 3-D visual perception is achieved by integrating stereoscopic and perspective cues. The canonical model describing the integration of these cues assumes that perspective signals sensed by the left and right eyes are indiscriminately pooled into a single representation that contributes to perception. Here, we show that this model fails to account for 3-D motion perception. We measured the sensitivity of male macaque monkeys to 3-D motion signaled by left-eye perspective cues, right-eye perspective cues, stereoscopic cues, and all three cues combined. The monkeys exhibited idiosyncratic differences in their biases and sensitivities for each cue, including left- and right-eye perspective cues, suggesting that the signals undergo at least partially separate neural processing. Importantly, sensitivity to combined-cue stimuli was greater than predicted by the canonical model, which previous studies found to account for the perception of 3-D orientation in both humans and monkeys. Instead, 3-D motion sensitivity was best explained by a model in which stereoscopic cues were integrated with left- and right-eye perspective cues whose representations were at least partially independent. These results indicate that the integration of perspective and stereoscopic cues is a shared computational strategy across 3-D processing domains. However, they also reveal a fundamental difference in how left- and right-eye perspective signals are represented for 3-D orientation versus motion perception. This difference results in more effective use of available sensory information in the processing of 3-D motion than orientation and may reflect the temporal urgency of avoiding and intercepting moving objects.
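The model comparison in this abstract rests on the same inverse-variance integration rule sketched earlier. The code below contrasts the canonical (pooled-perspective) prediction with the limiting case in which left- and right-eye perspective signals are fully independent; the abstract's best-fitting model lies between these extremes, and all threshold values here are hypothetical placeholders rather than the monkeys' measured sensitivities.

```python
# Sketch contrasting the two integration models compared in the abstract,
# using the standard inverse-variance combination rule for independent cues.
import numpy as np

def combine(sigmas):
    """Combined threshold predicted for independent cues."""
    return np.sum([s ** -2 for s in sigmas]) ** -0.5

sigma_stereo = 9.0        # hypothetical stereoscopic-cue threshold
sigma_persp_left = 7.0    # hypothetical left-eye perspective threshold
sigma_persp_right = 7.5   # hypothetical right-eye perspective threshold
sigma_persp_pooled = 6.5  # hypothetical pooled-perspective threshold

# Canonical model: stereo integrated with one pooled perspective signal.
canonical = combine([sigma_stereo, sigma_persp_pooled])

# Fully independent limiting case: stereo integrated with separate
# left- and right-eye perspective signals (three independent terms).
independent = combine([sigma_stereo, sigma_persp_left, sigma_persp_right])

print(f"canonical prediction:   sigma = {canonical:.2f}")
print(f"independent prediction: sigma = {independent:.2f} (higher precision)")
```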