The brain integrates information from multiple sensory modalities and, through this process, generates a coherent and apparently seamless percept of the external world. Although multisensory integration typically binds information that is derived from the same event, when multisensory cues are somewhat discordant they can result in illusory percepts such as the "ventriloquism effect." These biases in stimulus localization are generally accompanied by the perceptual unification of the two stimuli. In the current study, we sought to further elucidate the relationship between localization biases, perceptual unification, and measures of a participant's uncertainty in target localization (i.e., variability). Participants performed an auditory localization task in which they were also asked to report whether they perceived the auditory and visual stimuli to be perceptually unified. The auditory and visual stimuli were delivered at a variety of spatial (0°, 5°, 10°, 15°) and temporal (200, 500, 800 ms) disparities. Localization bias and reports of perceptual unity occurred even with substantial spatial (i.e., 15°) and temporal (i.e., 800 ms) disparities. Trial-by-trial comparison of these measures revealed a striking correlation: regardless of their disparity, whenever the auditory and visual stimuli were perceived as unified, they were localized at or very near the light. In contrast, when the stimuli were perceived as not unified, auditory localization was often biased away from the visual stimulus. Furthermore, localization variability was significantly lower when the stimuli were perceived as unified. Intriguingly, on non-unity trials such variability increased with decreasing disparity. Together, these results suggest strong and potentially mechanistic links between the multiple facets of multisensory integration that contribute to our perceptual Gestalt.
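The trial-by-trial analysis described above amounts to simple bookkeeping: split trials by the unity judgment, then compute the percent bias of the auditory report toward the light and the spread of the reports in each group. The sketch below illustrates that computation only; the function, array names, and use of NumPy are my own assumptions, not the authors' code.

```python
import numpy as np

def bias_and_variability(reported, visual, auditory, unified):
    """Percent bias of the auditory report toward the visual stimulus,
    and localization SD, split by the perceptual-unity judgment.
    All locations are in degrees; `unified` holds per-trial unity reports."""
    reported, visual, auditory = map(np.asarray, (reported, visual, auditory))
    unified = np.asarray(unified, dtype=bool)
    disparity = visual - auditory        # signed visual-auditory disparity (deg)
    shift = reported - auditory          # shift of the auditory report (deg)
    results = {}
    for label, mask in (("unified", unified), ("not_unified", ~unified)):
        nonzero = mask & (disparity != 0)   # percent bias is undefined at 0 deg
        results[label] = {
            "mean_bias_pct": 100.0 * float(np.mean(shift[nonzero] / disparity[nonzero])),
            "localization_sd": float(np.std(reported[mask], ddof=1)),
        }
    return results
```

On this bookkeeping, the abstract's central finding would appear as a mean bias near 100% with a small SD on unified trials, and a lower (or negative) bias with a larger SD on non-unified trials.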
The majority of multisensory neurons in the cat superior colliculus (SC) are able to synthesize cross-modal cues (e.g., visual and auditory) and thereby produce responses greater than those elicited by the most effective single-modality stimulus and, sometimes, greater than those predicted by the arithmetic sum of their modality-specific responses. The present study examined the role of corticotectal inputs from two cortical areas, the anterior ectosylvian sulcus (AES) and the rostral aspect of the lateral suprasylvian sulcus (rLS), in producing these response enhancements. This was accomplished by evaluating the multisensory properties of individual SC neurons during reversible cryogenic deactivation of these cortices, individually and in combination. Cortical deactivation eliminated the characteristic multisensory response enhancement of nearly all SC neurons but generally had little or no effect on a neuron's modality-specific responses. Thus, during deactivation, the responses of SC neurons to combinations of cross-modal stimuli were no different from those evoked by one or the other of these stimuli individually. Of the two cortical areas, AES had a much greater impact on SC multisensory integrative processes, with nearly half the SC neurons sampled dependent on it alone. In contrast, only a small number of SC neurons depended solely on rLS. However, most SC neurons exhibited dual dependencies, and their multisensory enhancement was mediated by either synergistic or redundant influences from AES and rLS. Corticotectal synergy was evident when deactivating either cortical area compromised the multisensory enhancement of an SC neuron, whereas corticotectal redundancy was evident when deactivation of both cortical areas was required to produce this effect. The results suggest that, although multisensory SC neurons can be created as a consequence of a variety of converging tectopetal afferents derived from a host of subcortical and cortical structures, the ability to synthesize cross-modal inputs, and thereby produce an enhanced multisensory response, requires functional inputs from the AES, the rLS, or both.
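For reference, the two criteria named in this abstract are conventionally quantified as (1) the percent change of the combined response over the best unisensory response, and (2) a comparison of the combined response against the arithmetic sum of the unisensory responses. A minimal sketch of these standard computations follows; the function names are my own, and the inputs would be, e.g., mean impulses per trial.

```python
def enhancement_index(visual_resp: float, auditory_resp: float,
                      combined_resp: float) -> float:
    """Percent multisensory enhancement: 100 * (CM - SMmax) / SMmax,
    where CM is the response to the cross-modal pair and SMmax is the
    larger of the two modality-specific responses."""
    sm_max = max(visual_resp, auditory_resp)
    return 100.0 * (combined_resp - sm_max) / sm_max

def is_superadditive(visual_resp: float, auditory_resp: float,
                     combined_resp: float) -> bool:
    """True if the combined response exceeds the arithmetic sum of the
    modality-specific responses (the stricter criterion named above)."""
    return combined_resp > visual_resp + auditory_resp
```

By these measures, the deactivation result amounts to the enhancement index collapsing toward zero while the unisensory inputs to the index are left essentially unchanged.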
The ability of a visual signal to influence the localization of an auditory target (i.e., "cross-modal bias") was examined as a function of the spatial disparity between the two stimuli and their absolute locations in space. Three experimental issues were addressed: (a) the effect of a spatially disparate visual stimulus on auditory localization judgments; (b) how the ability to localize visual, auditory, and spatially aligned multisensory (visual-auditory) targets is related to cross-modal bias; and (c) the relationship between the magnitude of cross-modal bias and the perception that the two stimuli are spatially "unified" (i.e., originate from the same location). Whereas variability in localization of auditory targets was large and fairly uniform for all tested locations, variability in localizing visual or spatially aligned multisensory targets was much smaller, and increased with increasing distance from the midline. This trend proved to be strongly correlated with biasing effectiveness, for although visual-auditory bias was unexpectedly large in all conditions tested, it decreased progressively (as localization variability increased) with increasing distance from the midline. Thus, central visual stimuli had a substantially greater biasing effect on auditory target localization than did more peripheral visual stimuli. It was also apparent that cross-modal bias decreased as the degree of visual-auditory disparity increased. Consequently, the greatest visual-auditory biases were obtained with small disparities at central locations. In all cases, the magnitude of these biases covaried with judgments of spatial unity. The results suggest that functional properties of the visual system play the predominant role in determining these visual-auditory interactions and that cross-modal biases can be substantially greater than previously noted.
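One common way to formalize the reported link between localization variability and biasing effectiveness is reliability-weighted (inverse-variance) cue combination, in which the less variable cue receives proportionally more weight. The sketch below illustrates that general model; it is not the analysis performed in this study, and the parameter values in the example are invented.

```python
def visual_weight(sigma_v: float, sigma_a: float) -> float:
    """Weight on the visual cue under inverse-variance weighting;
    sigma_v and sigma_a are unisensory localization SDs (deg)."""
    return (1.0 / sigma_v**2) / (1.0 / sigma_v**2 + 1.0 / sigma_a**2)

def combined_estimate(loc_v: float, loc_a: float,
                      sigma_v: float, sigma_a: float) -> float:
    """Reliability-weighted estimate of target location (deg)."""
    w = visual_weight(sigma_v, sigma_a)
    return w * loc_v + (1.0 - w) * loc_a

# Example: precise central vision (SD 1 deg) vs. coarse audition (SD 8 deg)
# predicts near-total visual capture, i.e., a large cross-modal bias.
print(combined_estimate(loc_v=10.0, loc_a=0.0, sigma_v=1.0, sigma_a=8.0))  # ~9.85
```

Under this model, the abstract's central observation falls out directly: as visual localization variability grows with eccentricity, the visual weight shrinks and the bias on auditory localization declines.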
Many neurons in the superior colliculus (SC) integrate sensory information from multiple modalities, giving rise to significant response enhancements. Although enhanced multisensory responses have been shown to depend on the spatial and temporal relationships of the stimuli as well as on their relative effectiveness, these factors alone do not appear sufficient to account for the substantial heterogeneity in the magnitude of the multisensory products that have been observed. Addressing this issue, the present experiments revealed substantial differences in the operations used by different multisensory SC neurons to integrate their cross-modal inputs, suggesting that intrinsic differences among these neurons may also play an important deterministic role in multisensory integration. In addition, the integrative operation employed by a given neuron was found to be well correlated with the neuron's dynamic range. In total, four categories of SC neurons were identified based on how their multisensory responses changed relative to the predicted addition of the two unisensory inputs as stimulus effectiveness was altered. Despite the presence of these categories, a general rule was that the most robust multisensory enhancements were seen with combinations of the least effective unisensory stimuli. Together, these results provide a better quantitative picture of the integrative operations performed by multisensory SC neurons and suggest mechanistic differences in how these neurons synthesize cross-modal information.
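The categorization described here rests on comparing a neuron's combined response against the additive prediction formed from its two unisensory responses. A hypothetical sketch of that comparison follows; the 10% tolerance band is an illustrative choice of my own, not the paper's statistical criterion.

```python
def classify_operation(visual_resp: float, auditory_resp: float,
                       combined_resp: float, tol: float = 0.10) -> str:
    """Classify a multisensory response relative to the additive
    prediction (sum of the two unisensory responses). The tolerance
    band `tol` is illustrative, standing in for a statistical test."""
    predicted = visual_resp + auditory_resp   # additive prediction
    if combined_resp > predicted * (1.0 + tol):
        return "superadditive"
    if combined_resp < predicted * (1.0 - tol):
        return "subadditive"
    return "additive"
```

Repeating this classification across levels of stimulus effectiveness is what would reveal how a given neuron's operation shifts with input strength, the dependence the abstract reports.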
Although there are many perceptual theories that posit particular maturational profiles in higher-order (i.e., cortical) multisensory regions, our knowledge of multisensory development is primarily derived from studies of a midbrain structure, the superior colliculus. Therefore, the present study examined the maturation of multisensory processes in an area of cat association cortex [i.e., the anterior ectosylvian sulcus (AES)] and found that these processes are rudimentary during early postnatal life and develop only gradually thereafter. The AES comprises separate visual, auditory, and somatosensory regions, along with many multisensory neurons at the intervening borders between them. During early life, sensory responsiveness in AES appears in an orderly sequence. Somatosensory neurons are present at 4 weeks of age and are followed by auditory and multisensory (somatosensory-auditory) neurons. Visual neurons and visually responsive multisensory neurons are first seen at 12 weeks of age. The earliest multisensory neurons are strikingly immature, lacking the ability to synthesize the cross-modal information they receive. With postnatal development, multisensory integrative capacity matures. The delayed maturation of multisensory neurons and multisensory integration in AES suggests that the higher-order processes dependent on these circuits appear comparatively late in ontogeny.