SUMMARY
Responses of neurons in early visual cortex change little with training, and appear insufficient to account for perceptual learning. Behavioral performance, however, relies on population activity, and the accuracy of a population code is constrained by correlated noise among neurons. We tested whether training changes interneuronal correlations in the dorsal medial superior temporal area, which is involved in multisensory heading perception. Pairs of single units were recorded simultaneously in two groups of subjects: animals trained extensively in a heading discrimination task, and “naïve” animals that performed a passive fixation task. Correlated noise was significantly weaker in trained versus naïve animals, which might be expected to improve coding efficiency. However, we show that the observed uniform reduction in noise correlations leads to little change in population coding efficiency when all neurons are decoded. Thus, global changes in correlated noise among sensory neurons may be insufficient to account for perceptual learning.
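The claim that a uniform reduction in correlated noise barely changes coding efficiency can be illustrated with a toy linear Fisher information calculation. The sketch below is not the study's analysis: the population size, cosine tuning, and correlation model (a uniform offset plus a component proportional to signal correlation) are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: cosine heading tuning with mildly heterogeneous gains.
N = 200
prefs = np.linspace(0.0, 2 * np.pi, N, endpoint=False)   # preferred headings
thetas = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
gains = rng.uniform(0.8, 1.2, N)
tuning = gains[:, None] * (1.0 + np.cos(thetas[None, :] - prefs[:, None]))
fprime = gains * np.sin(prefs)           # tuning-curve slopes at heading 0

# Signal correlations: similarity of tuning curves across headings.
z = tuning - tuning.mean(axis=1, keepdims=True)
z /= np.linalg.norm(z, axis=1, keepdims=True)
r_signal = z @ z.T                       # unit diagonal, positive semidefinite

def fisher_info(offset, slope, sigma2=1.0):
    """Linear Fisher information f' R^-1 f' / sigma^2, where the noise
    correlation between neurons i and j is offset + slope * r_signal[i, j]."""
    R = ((1.0 - offset - slope) * np.eye(N)
         + offset * np.ones((N, N))
         + slope * r_signal)
    return fprime @ np.linalg.solve(R, fprime) / sigma2

# "Naive" vs "trained": halve the uniform correlation offset while keeping
# the signal-correlation-dependent component fixed.
print("naive  :", fisher_info(offset=0.20, slope=0.10))
print("trained:", fisher_info(offset=0.10, slope=0.10))
# The printed values are close: the uniform correlation component mostly
# adds noise along the direction of shared population activity, which an
# optimal decoder reading out the whole population largely ignores.
```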
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues.
DOI: http://dx.doi.org/10.7554/eLife.04693.001
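The geometric basis for this visual strategy is that, under the standard pinhole-camera flow model (Longuet-Higgins and Prazdny, 1980), the rotational component of retinal velocity is independent of scene depth while the translational component scales with inverse depth. Below is a minimal sketch of that textbook decomposition; the specific numbers are illustrative, not the study's stimuli.

```python
def optic_flow(x, y, Z, T, w, f=1.0):
    """Instantaneous retinal velocity (u, v) of a point at image position
    (x, y) and depth Z, for observer translation T = (Tx, Ty, Tz) and
    rotation w = (wx, wy, wz), under the standard pinhole model."""
    Tx, Ty, Tz = T
    wx, wy, wz = w
    # Translational flow scales with 1/Z; rotational flow does not depend on Z.
    u = (x * Tz - f * Tx) / Z + x * y / f * wx - (f + x**2 / f) * wy + y * wz
    v = (y * Tz - f * Ty) / Z + (f + y**2 / f) * wx - x * y / f * wy - x * wz
    return u, v

# Forward translation combined with a pursuit-like eye rotation.
T, w = (0.1, 0.0, 1.0), (0.0, 0.05, 0.0)

# Two points at the same image location but different depths: the difference
# of their flow vectors (motion parallax) is purely translational, so it is
# informative about heading regardless of the rotation.
u_near, v_near = optic_flow(0.2, 0.1, Z=2.0, T=T, w=w)
u_far, v_far = optic_flow(0.2, 0.1, Z=8.0, T=T, w=w)
print("rotation-free parallax flow:", u_near - u_far, v_near - v_far)
```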
Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, our 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear if this mechanism is conserved in non-human primates, and how the underlying neural architecture constrains 3D perception. Here we assess 3D perception in macaque monkeys using a surface orientation discrimination task. We find that perception is generally accurate, but precision depends on the spatial pose of the surface and available cues. The results indicate that robust perception is achieved by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. They further suggest that 3D perception is influenced by a prior for the 3D orientation statistics of natural scenes. We compare the data to simulations based on the responses of 3D orientation selective neurons. The results are explained by a model in which two independent neuronal populations representing stereoscopic and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) are optimally integrated through linear summation. Perception of combined-cue stimuli is optimal given this architecture. However, an alternative architecture in which stereoscopic cues and perspective cues detected by each eye are represented by three independent populations yields two times greater precision than observed. This implies that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.

Author summary
Our eyes only sense two-dimensional projections of the world (like a movie on a screen), yet we perceive the world in three dimensions. To create reliable 3D percepts, the human visual system integrates distinct visual signals according to their reliabilities, which depend on conditions such as how far away an object is located and how it is oriented. Here we find that non-human primates similarly integrate different 3D visual signals, and that their perception is influenced by the 3D orientation statistics of natural scenes. Cue integration is thus a conserved mechanism for creating robust 3D percepts by the primate brain. Using simulations of neural population activity, based on neuronal recordings from the same animals, we show that some computations which occur widely in the brain facilitate 3D perception, while others hinder perception. This work addresses key questions about how neural systems solve the difficult problem of generating 3D percepts, identifies a plausible neural architecture for implementing robust 3D vision, and reveals how neural computation can simultaneously optimize and curb perception.
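The cue-combination logic in this abstract follows the standard reliability-weighted (maximum-likelihood) rule for independent Gaussian cues, under which each added independent population reduces the combined variance. Here is a toy sketch of why a three-population architecture predicts higher precision than a two-population one; the variances are invented, and the exact precision ratio depends on the assumed per-eye reliabilities, so this does not reproduce the specific factor reported above.

```python
def combine(variances):
    """Optimal (inverse-variance-weighted) combination of independent
    Gaussian cues: 1/var_comb = sum_i 1/var_i."""
    return 1.0 / sum(1.0 / v for v in variances)

# Hypothetical single-cue variances (arbitrary units), not measured values.
var_stereo = 4.0   # stereoscopic-cue population
var_persp = 4.0    # perspective-cue population (binocularly combined)

# Two-population architecture: stereo + one binocular perspective signal.
print("two populations  :", combine([var_stereo, var_persp]))             # 2.0

# Three-population alternative: stereo + an independent perspective signal
# per eye; more independent noise sources -> smaller combined variance,
# i.e., higher predicted precision than is actually observed.
print("three populations:", combine([var_stereo, var_persp, var_persp]))  # ~1.33
```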