Human observers combine multiple sensory cues synergistically to achieve greater perceptual sensitivity, but little is known about the underlying neuronal mechanisms. We recorded from neurons in the dorsal medial superior temporal area (MSTd) during a task in which trained monkeys combine visual and vestibular cues near optimally to discriminate heading. During bimodal stimulation, MSTd neurons combine visual and vestibular inputs linearly with sub-additive weights. Neurons with congruent heading preferences for visual and vestibular stimuli show improvements in sensitivity that parallel behavioral effects. In contrast, neurons with opposite preferences show diminished sensitivity under cue combination. Responses of congruent cells are more strongly correlated with monkeys' perceptual decisions than opposite cells, suggesting that the animal monitors the activity of congruent cells to a greater extent during cue integration. These findings demonstrate perceptual cue integration in non-human primates and identify a population of neurons that may form its neural basis.

Understanding how the brain combines different sources of sensory information to optimize perception is a fundamental problem in neuroscience. Information from different sensory modalities is often seamlessly integrated into a unified percept. Combining sensory inputs leads to improved behavioral performance in many contexts, including integration of texture and motion cues for depth perception 1, stereo and texture cues for slant perception 2,3, visual-haptic integration 4,5, visual-auditory localization 6, and object recognition 7. Multisensory integration in human behavior often follows predictions of a quantitative framework that applies Bayesian statistical inference to the problem of cue integration 8-10. An important prediction is that subjects show greater perceptual sensitivity when two cues are presented together than when either cue is presented alone. This improvement in sensitivity is largest (a factor of √2) when the two cues have equal reliability 5,10.

Despite intense recent interest in cue integration, the underlying neural mechanisms remain unclear. Improved perceptual performance during cue integration is thought to be mediated by neurons selective for multiple sensory stimuli 11. Multi-modal neurons have been described in several brain areas 12,13, but these studies have typically been performed in anesthetized or passively viewing animals 14-17. Multi-modal neurons have not been studied during …
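For concreteness, the maximum-likelihood prediction referenced above (greater bimodal sensitivity, improving by at most a factor of √2 when the two cues are equally reliable) can be computed directly from the single-cue thresholds. The short sketch below is illustrative only; the function names and example values are assumptions, not taken from the paper.

```python
import numpy as np

def optimal_combined_sigma(sigma_vis, sigma_vest):
    """Maximum-likelihood prediction for the bimodal discrimination threshold
    (cumulative-Gaussian sigma) given the two single-cue thresholds."""
    return np.sqrt((sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2))

def reliability_weights(sigma_vis, sigma_vest):
    """Optimal cue weights are proportional to reliability (1 / sigma^2)."""
    r_vis, r_vest = 1.0 / sigma_vis**2, 1.0 / sigma_vest**2
    w_vis = r_vis / (r_vis + r_vest)
    return w_vis, 1.0 - w_vis

# Equally reliable cues: the predicted threshold improves by a factor of sqrt(2).
print(optimal_combined_sigma(2.0, 2.0))   # -> ~1.414 deg (i.e., 2.0 / sqrt(2))
print(reliability_weights(2.0, 2.0))      # -> (0.5, 0.5)
```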
Robust perception of self-motion requires integration of visual motion signals with nonvisual cues. Neurons in the dorsal subdivision of the medial superior temporal area (MSTd) may be involved in this sensory integration, because they respond selectively to global patterns of optic flow, as well as translational motion in darkness. Using a virtual-reality system, we have characterized the three-dimensional (3D) tuning of MSTd neurons to heading directions defined by optic flow alone, inertial motion alone, and congruent combinations of the two cues. Among 255 MSTd neurons, 98% exhibited significant 3D heading tuning in response to optic flow, whereas 64% were selective for heading defined by inertial motion. Heading preferences for visual and inertial motion could be aligned but were just as frequently opposite. Moreover, heading selectivity in response to congruent visual/vestibular stimulation was typically weaker than that obtained using optic flow alone, and heading preferences under congruent stimulation were dominated by the visual input. Thus, MSTd neurons generally did not integrate visual and nonvisual cues to achieve better heading selectivity. A simple two-layer neural network, which received eye-centered visual inputs and head-centered vestibular inputs, reproduced the major features of the MSTd data. The network was trained to compute heading in a head-centered reference frame under all stimulus conditions, such that it performed a selective reference-frame transformation of visual, but not vestibular, signals. The similarity between network hidden units and MSTd neurons suggests that MSTd may be an early stage of sensory convergence involved in transforming optic flow information into a (head-centered) reference frame that facilitates integration with vestibular signals.
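A minimal sketch of the kind of two-layer network described above is shown below, written here in PyTorch. The layer sizes, the sigmoid nonlinearity, and the way eye position is appended to the visual input are assumptions for illustration; the paper's network details may differ.

```python
import torch
import torch.nn as nn

class HeadingNet(nn.Module):
    """Sketch of a two-layer network of the type described above: eye-centered
    visual inputs (plus eye position) and head-centered vestibular inputs
    converge on one hidden layer, which is read out as head-centered heading.
    All layer sizes and the choice of nonlinearity are illustrative assumptions."""

    def __init__(self, n_visual=36, n_eye_pos=2, n_vestibular=36,
                 n_hidden=40, n_output=36):
        super().__init__()
        self.hidden = nn.Linear(n_visual + n_eye_pos + n_vestibular, n_hidden)
        self.readout = nn.Linear(n_hidden, n_output)

    def forward(self, visual, eye_pos, vestibular):
        # The visual population is eye-centered; the vestibular population is
        # already head-centered; eye position provides the signal needed to
        # remap the visual input.
        x = torch.cat([visual, eye_pos, vestibular], dim=-1)
        h = torch.sigmoid(self.hidden(x))   # hidden units ~ candidate MSTd-like units
        return self.readout(h)              # population code for head-centered heading

# Training (not shown) would minimize readout error against the true
# head-centered heading under visual-only, vestibular-only, and combined trials.
```

The key design point is that only the visual pathway needs a reference-frame remapping by the hidden layer, mirroring the selective transformation of visual, but not vestibular, signals described above.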
Recent findings of vestibular responses in visual cortex, the dorsal medial superior temporal area (MSTd), suggest that vestibular signals might contribute to cortical processes mediating self-motion perception. We tested this hypothesis in monkeys trained to perform a fine heading discrimination task based solely on inertial motion cues. Neuronal sensitivity was typically lower than psychophysical sensitivity, and only the most sensitive neurons rivaled behavioral performance. MSTd responses were significantly correlated with perceptual decisions, with correlations being strongest for the most sensitive neurons. These results support a functional link between MSTd and heading perception based on inertial motion cues. These cues are mainly of vestibular origin, since labyrinthectomy produced dramatic elevation of psychophysical thresholds and abolished MSTd responses. This study provides the first evidence that links single-unit activity to spatial perception mediated by vestibular signals, and suggests that the role of MSTd in self-motion perception extends beyond optic flow processing.

The vestibular apparatus provides sensory information about the angular velocity (semicircular canals) and linear acceleration (otolith organs) of the head 1,2. Lesion studies demonstrate that vestibular signals play critical roles in several reflexive processes, including compensatory eye movements (vestibulo-ocular reflex, VOR) 3,4, maintenance of balance, and control of posture 5. The neural circuits that mediate these automatic processes, especially the VOR, have been explored extensively 6,7.

The vestibular system should also contribute to processes that are under cognitive control, such as perception of spatial orientation and self-motion 8-10. To investigate this, we trained rhesus monkeys to report their perceived direction of heading (leftward vs. rightward relative to straight ahead) based solely on inertial motion cues. We show that trained animals discriminate differences in heading as small as 1-2° (comparable to human performance 11), and that damage to the vestibular labyrinth dramatically impairs performance. This establishes the heading discrimination task as a sensitive probe of vestibular function relevant to self-motion perception.

Where in the brain can one find neurons that mediate heading perception based on inertial motion cues? Unlike in other sensory systems, relatively little is known about the cortical processing of vestibular signals. Whereas neural activity has been linked to perception in other …
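The correlation between single-trial responses and perceptual decisions reported above is conventionally quantified with an ROC-based choice probability. The following is a generic sketch of that analysis, not the authors' code; the example spike counts and choices are fabricated purely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def choice_probability(spike_counts, chose_preferred):
    """ROC-based choice probability: the probability that an ideal observer
    could predict the animal's choice from this neuron's spike count alone.
    0.5 indicates no relationship; larger values indicate higher firing when
    the animal chooses the neuron's preferred heading. Generic sketch only."""
    y = np.asarray(chose_preferred, dtype=int)     # 1 = preferred choice, 0 = null
    return roc_auc_score(y, np.asarray(spike_counts, dtype=float))

# Fabricated example trials, for illustration only:
counts = [12, 15, 9, 18, 7, 14, 11, 16]
choice = [ 1,  1, 0,  1, 0,  1,  0,  1]
print(choice_probability(counts, choice))          # value between 0 and 1
```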
Responses of neurons in early visual cortex change little with training, and appear insufficient to account for perceptual learning. Behavioral performance, however, relies on population activity, and the accuracy of a population code is constrained by correlated noise among neurons. We tested whether training changes interneuronal correlations in the dorsal medial superior temporal area, which is involved in multisensory heading perception. Pairs of single units were recorded simultaneously in two groups of subjects: animals trained extensively in a heading discrimination task, and “naïve” animals that performed a passive fixation task. Correlated noise was significantly weaker in trained versus naïve animals, which might be expected to improve coding efficiency. However, we show that the observed uniform reduction in noise correlations leads to little change in population coding efficiency when all neurons are decoded. Thus, global changes in correlated noise among sensory neurons may be insufficient to account for perceptual learning.
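One standard way to ask how correlated noise constrains population coding efficiency, as in the comparison above, is to compute linear Fisher information, J = f'(s)ᵀ C⁻¹ f'(s), under different correlation structures. The tuning curves, limited-range correlation model, and correlation magnitudes in the sketch below are assumptions for illustration, not values or code from the study.

```python
import numpy as np

def linear_fisher_information(f_prime, cov):
    """Linear Fisher information J = f'(s)^T C^{-1} f'(s) for a population
    with tuning-curve derivatives f'(s) and noise covariance C."""
    return float(f_prime @ np.linalg.solve(cov, f_prime))

def limited_range_covariance(rates, prefs, c_max, tau=1.0):
    """Assumed noise model: Poisson-like variances (= mean rate) and
    correlations that fall off with the difference in heading preference."""
    d = np.abs(prefs[:, None] - prefs[None, :])
    d = np.minimum(d, 2 * np.pi - d)               # circular distance between preferences
    corr = c_max * np.exp(-d / tau)
    np.fill_diagonal(corr, 1.0)
    sd = np.sqrt(rates)
    return corr * np.outer(sd, sd)

# Toy heading-tuned population (all parameter values are assumptions).
n = 200
prefs = np.linspace(-np.pi, np.pi, n, endpoint=False)
s = 0.0                                            # reference heading
rates = 20.0 * np.exp(np.cos(prefs - s) - 1.0) + 5.0
f_prime = 20.0 * np.exp(np.cos(prefs - s) - 1.0) * np.sin(prefs - s)

# Compare information when the whole population is decoded under a higher
# ("naive") and a lower ("trained") assumed correlation level.
for c_max in (0.20, 0.10):
    J = linear_fisher_information(f_prime, limited_range_covariance(rates, prefs, c_max))
    print(f"c_max = {c_max:.2f}:  J = {J:.1f}")
```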