The deeper layers of the midbrain superior colliculus (SC) contain a topographic motor map in which a localized population of cells is recruited for each saccade, but how the brain stem decodes the dynamic SC output is unclear. Here we analyze saccade-related responses in the monkey SC to test a new dynamic ensemble-coding model, which proposes that each spike from each saccade-related SC neuron adds a fixed, site-specific contribution to the intended eye movement command. As predicted by this simple theory, we found that the cumulative number of spikes in the cell bursts is tightly related to the displacement of the eye along the ideal straight trajectory, both for normal saccades and for strongly curved, blink-perturbed saccades toward a single visual target. This dynamic relation depends systematically on the metrics of the saccade displacement vector and can be fully predicted from a quantitative description of the cell's classical movement field. Furthermore, we show that a linear feedback model of the brain stem, which is driven by dynamic linear vector summation of measured SC firing patterns, produces realistic two-dimensional (2D) saccade trajectories and kinematics. We conclude that the SC may act as a nonlinear, vectorial saccade generator that programs an optimal straight eye-movement trajectory.

INTRODUCTION

The midbrain superior colliculus (SC) is a sensorimotor interface that is critically involved in the control of rapid gaze shifts. An important problem in understanding its role in gaze control is how the spatial distribution of movement-related activity in its motor map is ultimately transformed into the temporal code carried by motor neurons (Sparks and Hartwich-Young 1989). In this study, we analyzed saccade-related responses in the monkey SC to test a new theoretical framework for the involvement of the SC in the generation of saccades in two dimensions (2D). First, we present a novel analysis of SC spike trains that provides evidence for dynamic vector summation of movement contributions provided by each spike of each cell in the active population. We then analyze the spatial-temporal distribution of SC activity. The results are used to test the predictions and emergent properties of our new ensemble-coding theory, which assumes dynamic, linear decoding of the SC population activity by the brain stem saccade generator. Finally, we propose and test a new quantitative description of dynamic SC movement fields that is implied by our theory. In what follows, we first explain why a new approach is called for by highlighting the main findings that have led to several controversies. These controversies include 1) static versus dynamic involvement of the SC, 2) vector summation versus vector averaging of the population activity, and 3) feedforward versus feedback involvement of the SC.

Earlier theories and controversies
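To make the decoding scheme concrete, the following is a minimal sketch of dynamic linear vector summation as the abstract describes it: each spike from cell i adds a fixed, site-specific mini-vector, and the desired eye displacement at time t is the running sum of all contributions fired up to t. This is an illustrative reading of the model, not the authors' implementation; the function name and the example spike trains and site vectors are hypothetical.

```python
import numpy as np

def decoded_displacement(spike_times, site_vectors, t):
    """Dynamic linear vector summation (sketch).

    spike_times  : list of 1-D arrays; spike_times[i] holds the spike
                   times (s) of SC cell i during its saccade-related burst.
    site_vectors : (n_cells, 2) array; site_vectors[i] is the fixed,
                   site-specific movement contribution (deg) added by
                   each spike of cell i.
    t            : decoding time (s).

    Returns the cumulative 2-D desired eye displacement at time t.
    """
    delta = np.zeros(2)
    for spikes, m in zip(spike_times, site_vectors):
        n_spikes = np.count_nonzero(spikes <= t)  # spikes fired so far
        delta += n_spikes * m                     # each spike adds a fixed vector
    return delta

# Hypothetical example: two cells with different site vectors.
spike_times = [np.array([0.005, 0.010, 0.018]), np.array([0.008, 0.020])]
site_vectors = np.array([[0.4, 0.1], [0.3, -0.2]])
print(decoded_displacement(spike_times, site_vectors, t=0.015))
```

Under this reading, the cumulative spike count of each cell directly scales its contribution, which is why the abstract's tight relation between spike count and eye displacement follows from the model.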
The coordination between eye and head movements during a rapid orienting gaze shift has been investigated mainly when subjects made horizontal movements towards visual targets with the eyes starting at the centre of the orbit. Under these conditions, it is difficult to identify the signals driving the two motor systems, because their initial motor errors are identical and equal to the coordinates of the sensory stimulus (i.e. retinal error). In this paper, we investigate head-free gaze saccades of human subjects towards visual as well as auditory stimuli presented in the two-dimensional frontal plane, under both aligned and unaligned initial fixation conditions. Although the basic patterns for eye and head movements were qualitatively comparable for both stimulus modalities, systematic differences were also obtained under aligned conditions, suggesting a task-dependent movement strategy. Auditory-evoked gaze shifts were endowed with smaller eye-head latency differences, consistently larger head movements and smaller concomitant ocular saccades than visually triggered movements. By testing gaze control for eccentric initial eye positions, we found that the head displacement vector was best related to the initial head motor-error (target-re-head), rather than to the initial gaze error (target-re-eye), regardless of target modality. These findings suggest an independent control of the eye and head motor systems by commands in different frames of reference. However, we also observed a systematic influence of the oculomotor response on the properties of the evoked head movements, indicating a subtle coupling between the two systems. The results are discussed in view of current eye-head coordination models.
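The distinction between the two candidate driving signals reduces to simple vector geometry: gaze error is the target re. the eye (eye-in-space), while head motor error is the target re. the head. A minimal sketch, with hypothetical 2-D positions chosen only to illustrate the frame difference:

```python
import numpy as np

# Hypothetical positions in space coordinates (deg):
target        = np.array([30.0, 10.0])   # T: target in space
head_in_space = np.array([10.0,  0.0])   # H: head orientation in space
eye_in_head   = np.array([ 5.0,  5.0])   # E: eye orientation re. the head

gaze_in_space = head_in_space + eye_in_head

gaze_error       = target - gaze_in_space   # target-re-eye
head_motor_error = target - head_in_space   # target-re-head

print(gaze_error)        # [15.  5.]
print(head_motor_error)  # [20. 10.]
```

The two errors coincide only when the eye starts centred in the orbit (eye_in_head = 0), which is why eccentric initial eye positions are needed to dissociate them, as the abstract describes.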
In this paper, we show that human saccadic eye movements toward a visual target are generated with a reduced latency when this target is spatially and temporally aligned with an irrelevant auditory nontarget. This effect gradually disappears if the temporal and/or spatial alignment of the visual and auditory stimuli is changed. When subjects are able to accurately localize the auditory stimulus in two dimensions, the spatial dependence of the reduction in latency depends on the actual radial distance between the auditory and the visual stimulus. If, however, only the azimuth of the sound source can be determined by the subjects, the horizontal target separation determines the strength of the interaction. Neither saccade accuracy nor saccade kinematics were affected in these paradigms. We propose that, in addition to an aspecific warning signal, the reduction of saccadic latency is due to interactions that take place at a multimodal stage of saccade programming, where the perceived positions of visual and auditory stimuli are represented in a common frame of reference. This hypothesis is in agreement with our finding that the saccades often are initially directed to the average position of the visual and the auditory target, provided that their spatial separation is not too large. Striking similarities with electrophysiological findings on multisensory interactions in the deep layers of the midbrain superior colliculus are discussed.

Humans, as well as other animals, are equipped with various specialized senses that provide them with information about their environment. Several of these sensory systems represent the spatial location of an object on the basis of the received sensory input. This information about stimulus location can already be present at the level of the sensory organ, as is the case in the visual and somatosensory systems, or it can be neurally derived on the basis of indirect cues, as in the auditory system. Many of the objects that surround an organism, however, provide it with sensory information through various modalities at the same time.

In the literature, there is accumulating evidence that multimodal information about an object's location can lead to a reduction of the response latency and to an improvement of localization accuracy. For example, it has been shown that a motor response toward a visual target can be made with a shorter latency when this target is accompanied by an auditory signal at the same location. Simon and Craft (1970)
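As a toy illustration of the spatial-averaging account described above: when the visual and auditory stimuli are close enough, the initial saccade lands near their average position; at larger separations the visual target dominates. The separation threshold and the equal weighting below are assumptions for illustration, not values fitted to the paper's data.

```python
import numpy as np

def initial_saccade_target(visual_pos, auditory_pos, max_separation=20.0):
    """Hypothetical averaging rule for bimodal saccade targets (deg)."""
    v = np.asarray(visual_pos, dtype=float)
    a = np.asarray(auditory_pos, dtype=float)
    if np.linalg.norm(v - a) <= max_separation:
        return (v + a) / 2.0   # small separation: average position
    return v                   # large separation: visual target only

print(initial_saccade_target([10.0, 0.0], [14.0, 3.0]))  # averaged endpoint
print(initial_saccade_target([10.0, 0.0], [40.0, 0.0]))  # visual target only
```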
Because the inner ear is not organized spatially, sound localization relies on the neural processing of implicit acoustic cues. To determine a sound's position, the brain must learn and calibrate these cues, using accurate spatial feedback from other sensorimotor systems. Experimental evidence for such a system has been demonstrated in barn owls, but not in humans. Here, we demonstrate the existence of ongoing spatial calibration in the adult human auditory system. The spectral elevation cues of human subjects were disrupted by modifying their outer ears (pinnae) with molds. Although localization of sound elevation was dramatically degraded immediately after the modification, accurate performance was steadily reacquired. Interestingly, learning the new spectral cues did not interfere with the neural representation of the original cues, as subjects could localize sounds with both normal and modified pinnae.
Monaurally deaf people lack the binaural acoustic difference cues in sound level and timing that are needed to encode sound location in the horizontal plane (azimuth). It has been proposed that these people therefore rely on spectral pinna cues of their normal ear to localize sounds. However, the acoustic head-shadow effect (HSE) might also serve as an azimuth cue, despite its ambiguity when absolute sound levels are unknown. Here, we assess the contribution of either cue to two-dimensional (2D) sound localization in monaurally deaf listeners. In a localization test with randomly interleaved sound levels, we show that all monaurally deaf listeners relied heavily on the HSE, whereas binaural control listeners ignored this cue. However, some monaural listeners responded partly to actual sound-source azimuth, regardless of sound level. We show that these listeners extracted azimuth information from their pinna cues. The better monaural listeners were able to localize azimuth on the basis of spectral cues, the better their ability to also localize sound-source elevation. In a subsequent localization experiment with one fixed sound level, monaural listeners rapidly adopted a strategy on the basis of the HSE. We conclude that monaural spectral cues are not sufficient for adequate 2D sound localization under unfamiliar acoustic conditions. Thus, monaural listeners strongly rely on the ambiguous HSE, which may help them to cope with familiar acoustic environments.