Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, our 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear if this mechanism is conserved in non-human primates, and how the underlying neural architecture constrains 3D perception. Here we assess 3D perception in macaque monkeys using a surface orientation discrimination task. We find that perception is generally accurate, but precision depends on the spatial pose of the surface and available cues. The results indicate that robust perception is achieved by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. They further suggest that 3D perception is influenced by a prior for the 3D orientation statistics of natural scenes. We compare the data to simulations based on the responses of 3D orientation selective neurons. The results are explained by a model in which two independent neuronal populations representing stereoscopic and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) are optimally integrated through linear summation. Perception of combined-cue stimuli is optimal given this architecture. However, an alternative architecture in which stereoscopic cues and perspective cues detected by each eye are represented by three independent populations yields two times greater precision than observed. This implies that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.

Author summary

Our eyes only sense two-dimensional projections of the world (like a movie on a screen), yet we perceive the world in three dimensions. To create reliable 3D percepts, the human visual system integrates distinct visual signals according to their reliabilities, which depend on conditions such as how far away an object is located and how it is oriented. Here we find that non-human primates similarly integrate different 3D visual signals, and that their perception is influenced by the 3D orientation statistics of natural scenes. Cue integration is thus a conserved mechanism for creating robust 3D percepts by the primate brain. Using simulations of neural population activity, based on neuronal recordings from the same animals, we show that some computations which occur widely in the brain facilitate 3D perception, while others hinder perception. This work addresses key questions about how neural systems solve the difficult problem of generating 3D percepts, identifies a plausible neural architecture for implementing robust 3D vision, and reveals how neural computation can simultaneously optimize and curb perception.
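The precision comparison in the abstract above follows from standard reliability-weighted cue integration: for independent Gaussian cues, precisions (inverse variances) add, so a readout with three independent populations (stereoscopic plus perspective from each eye) necessarily outperforms one with two. A minimal sketch of this arithmetic, using purely illustrative single-cue thresholds rather than values from the study:

```python
import math

def combined_sd(sds):
    """Optimal (maximum-likelihood) integration of independent Gaussian cues:
    the combined precision is the sum of the individual precisions."""
    precision = sum(1.0 / s**2 for s in sds)
    return math.sqrt(1.0 / precision)

# Hypothetical single-cue discrimination thresholds (deg); illustrative only.
stereo, perspective = 4.0, 6.0

# Two-population architecture: stereo + binocularly combined perspective.
two_pop = combined_sd([stereo, perspective])

# Three-population architecture: perspective from each eye kept independent.
three_pop = combined_sd([stereo, perspective, perspective])

print(two_pop > three_pop)  # True: the extra independent cue improves precision
```

The same calculation underlies the paper's comparison: keeping the two eyes' perspective signals as independent populations adds an extra precision term, which is why that architecture predicts better performance than observed.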
Modern neuroscience research often requires the coordination of multiple processes such as stimulus generation, real-time experimental control, and behavioral and neural measurements. The technical demands of simultaneously managing these processes with high temporal fidelity are a barrier that limits the number of labs performing such work. Here we present an open-source, network-based parallel processing framework that lowers this barrier. The Real-Time Experimental Control with Graphical User Interface (REC-GUI) framework offers multiple advantages: (i) a modular design that is agnostic to coding language(s) and operating system(s) to maximize experimental flexibility and minimize researcher effort, (ii) simple interfacing to connect multiple measurement and recording devices, (iii) high temporal fidelity by dividing task demands across CPUs, and (iv) real-time control using a fully customizable and intuitive GUI. We present applications for human, non-human primate, and rodent studies which collectively demonstrate that the REC-GUI framework facilitates technically demanding, behavior-contingent neuroscience research.

Editorial note: This article has been through an editorial process in which the authors decide how to respond to the issues raised during peer review. The Reviewing Editor's assessment is that all the issues have been addressed (see decision letter).
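The network-based division of labor described above can be illustrated with a small sketch: a control process sends commands to a stimulus loop over UDP, so the GUI and stimulus rendering can run on separate CPUs without blocking each other. This is illustrative only and does not reproduce REC-GUI's actual protocol, commands, or API:

```python
import socket
import threading
import time

def stimulus_loop(sock, received):
    """Receive parameter updates until a STOP command arrives."""
    while True:
        msg, _ = sock.recvfrom(1024)
        if msg == b"STOP":
            break
        received.append(msg.decode())  # e.g., apply a new stimulus orientation

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))           # ephemeral port on loopback
port = recv_sock.getsockname()[1]

received = []
worker = threading.Thread(target=stimulus_loop, args=(recv_sock, received))
worker.start()

ctrl_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ctrl_sock.sendto(b"ORI 45", ("127.0.0.1", port))  # hypothetical command name
time.sleep(0.05)                                   # preserve datagram order on loopback
ctrl_sock.sendto(b"STOP", ("127.0.0.1", port))
worker.join()
print(received)  # ['ORI 45']
```

Because the sender never waits on the receiver, a slow rendering step cannot stall the control loop, which is the point of splitting the framework's task demands across processes and CPUs.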
The visual system must reconstruct the dynamic, three-dimensional (3D) world from ambiguous two-dimensional (2D) retinal images. In this review, we synthesize current literature on how the visual system of nonhuman primates performs this transformation through multiple channels within the classically defined dorsal (where) and ventral (what) pathways. Each of these channels is specialized for processing different 3D features (e.g., the shape, orientation, or motion of objects, or the larger scene structure). Despite the common goal of 3D reconstruction, neurocomputational differences between the channels impose distinct information-limiting constraints on perception. Convergent evidence further points to the little-studied area V3A as a potential branchpoint from which multiple 3D-fugal processing channels diverge. We speculate that the expansion of V3A in humans may have supported the emergence of advanced 3D spatial reasoning skills. Lastly, we discuss future directions for exploring 3D information transmission across brain areas and experimental approaches that can further advance the understanding of 3D vision. Expected final online publication date for the Annual Review of Vision Science, Volume 9 is September 2023.
Visually guided behaviors require the brain to transform ambiguous retinal images into object-level spatial representations and implement sensorimotor transformations. These processes are supported by the dorsal 'where' pathway. However, the specific functional contributions of areas along this pathway remain elusive due in part to methodological differences across studies. We previously showed that macaque caudal intraparietal (CIP) area neurons possess robust three-dimensional (3D) visual representations, carry choice- and saccade-related activity, and exhibit experience-dependent sensorimotor associations (Chang et al., 2020b). Here, we used a common experimental design to reveal parallel processing, hierarchical transformations, and the formation of sensorimotor associations along the 'where' pathway by extending the investigation to V3A, a major feedforward input to CIP. Higher-level 3D representations and choice-related activity were more prevalent in CIP than V3A. Both areas contained saccade-related activity that predicted the direction/timing of eye movements. Intriguingly, the time-course of saccade-related activity in CIP aligned with the temporally integrated V3A output. Sensorimotor associations between 3D orientation and saccade direction preferences were stronger in CIP than V3A, and moderated by choice signals in both areas. Together, the results explicate parallel representations, hierarchical transformations, and functional associations of visual and saccade-related signals at a key juncture in the 'where' pathway.
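The finding that CIP saccade-related activity tracks the "temporally integrated V3A output" can be made concrete with a leaky integrator, a standard model of neural temporal integration. The time constant, time step, and step-like input below are illustrative assumptions, not parameters estimated in the study:

```python
import numpy as np

def leaky_integrate(rate, dt=0.001, tau=0.05):
    """Leaky temporal integration: tau * dy/dt = -y + rate(t).
    Output rises toward the input with time constant tau (seconds)."""
    y = np.zeros_like(rate, dtype=float)
    for t in range(1, len(rate)):
        y[t] = y[t - 1] + (dt / tau) * (-y[t - 1] + rate[t])
    return y

# A step of input (standing in for a hypothetical V3A response) yields a
# gradually ramping output, qualitatively like a delayed build-up downstream.
step = np.concatenate([np.zeros(200), np.ones(800)])  # 1 s at 1 ms resolution
out = leaky_integrate(step)
```

The qualitative point is the lag: an integrator's output builds up after its input, so a downstream trace that matches the integrated input will peak later than the input itself.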