Our perception of the world's three-dimensional (3D) structure is critical for object recognition, navigation, and the planning of actions. To accomplish this, the brain combines different types of visual information about depth structure, but at present the neural architecture mediating this combination remains largely unknown. Here, we report neuroimaging correlates of human 3D shape perception arising from the combination of two depth cues. We measured fMRI responses while observers judged the 3D structure of two sequentially presented images of slanted planes defined by binocular disparity and perspective. We compared the behavioral and fMRI responses evoked by changes in one or both of the depth cues. fMRI responses in extrastriate areas (hMT+/V5 and the lateral occipital complex), rather than responses in early retinotopic areas, reflected differences in perceived 3D shape, suggesting 'combined-cue' representations in higher visual areas. These findings provide insight into the neural circuits engaged when the human brain combines different information sources to achieve unified 3D visual perception.
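The cue combination described here is commonly formalized as reliability-weighted averaging (maximum-likelihood integration). The abstract states the idea only verbally, so the following is a standard textbook formulation rather than this paper's specific analysis; \(\hat{S}_d\) and \(\hat{S}_p\) denote the slant estimates from disparity and perspective, with variances \(\sigma_d^2\) and \(\sigma_p^2\):

```latex
\hat{S} = w_d \hat{S}_d + w_p \hat{S}_p, \qquad
w_d = \frac{1/\sigma_d^2}{1/\sigma_d^2 + 1/\sigma_p^2}, \qquad
w_p = 1 - w_d
```

Under this model the combined estimate has variance \(\sigma_c^2 = \sigma_d^2\sigma_p^2/(\sigma_d^2+\sigma_p^2)\), smaller than either single-cue variance, which is why a combined-cue representation is behaviorally advantageous.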
To interact effectively with the environment, the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound helps a concurrent visual flash elude flash suppression and enter perceptual awareness, depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals, pointing toward low-level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility: the spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception.
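A minimal sketch of the congruency analysis implied above, in Python, assuming a hypothetical trial table (the variable names and values are illustrative, not taken from the study):

```python
import numpy as np

# Hypothetical CFS trials: 1 = flash judged visible (broke suppression), 0 = invisible.
visible = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
# True where the sound and the flash were presented on the same side.
congruent = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0], dtype=bool)

# Breakthrough rate: proportion of trials on which the flash entered awareness.
rate_congruent = visible[congruent].mean()
rate_incongruent = visible[~congruent].mean()

# The abstract's key effect: a higher breakthrough rate for spatially congruent sounds.
print(f"congruent: {rate_congruent:.2f}  incongruent: {rate_incongruent:.2f}")
print(f"congruency effect: {rate_congruent - rate_incongruent:+.2f}")
```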
Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. a predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. the looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a 'simple' random-dot kinematogram showing a starfield and (2) a 'naturalistic' visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals: it is enhanced when the audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes processing of biologically significant looming stimuli, especially when they are paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.
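The looming bias is defined above as the increase in dominance times for looming relative to receding percepts. A minimal sketch of that index, assuming hypothetical dominance-phase durations from one rivalry run (all values illustrative):

```python
import numpy as np

# Hypothetical dominance-phase durations (s) reported during binocular rivalry.
looming_durations = np.array([3.2, 4.1, 2.8, 3.9, 4.5])
receding_durations = np.array([2.1, 2.6, 1.9, 2.4, 2.2])

# Looming bias: mean dominance time for looming minus receding percepts.
bias = looming_durations.mean() - receding_durations.mean()

# Positive = the looming percept dominates longer; per the abstract, receding
# sounds can drive this below zero, i.e. convert it into a receding bias.
print(f"looming bias: {bias:+.2f} s")
```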
In multistable perception, the brain alternates between several perceptual explanations of ambiguous sensory signals. It is unknown whether multistable processes can interact across the senses. In the study reported here, we presented subjects with unisensory (visual or tactile), spatially congruent visuotactile, and spatially incongruent visuotactile apparent motion quartets. Congruent stimulation induced pronounced visuotactile interactions, as indicated by increased dominance times for both vision and touch, and an increased percentage bias for the percept already dominant under unisensory stimulation. Thus, the joint evidence from vision and touch stabilizes the more likely perceptual interpretation and thereby decelerates the rivalry dynamics. Yet the temporal dynamics also depended on subjects' attentional focus and were generally slower for tactile than for visual reports. Our results support Bayesian approaches to perceptual inference, in which the probability of a perceptual interpretation is determined by combining visual, tactile, or visuotactile evidence with modality-specific priors that depend on subjects' attentional focus. Critically, the specificity of visuotactile interactions for spatially congruent stimulation indicates multisensory rather than cognitive-bias mechanisms.
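The Bayesian account invoked above can be written out explicitly. The abstract gives the model only in words, so the notation here is my own: with perceptual interpretation \(I\), visual evidence \(V\), tactile evidence \(T\), and a modality-specific prior conditioned on the attentional focus \(a\),

```latex
P(I \mid V, T, a) \;\propto\; P(V \mid I)\, P(T \mid I)\, P(I \mid a)
```

Spatially congruent stimulation supplies two likelihoods favoring the same interpretation, which raises that interpretation's posterior, stabilizes the dominant percept, and thereby decelerates the rivalry dynamics, as reported above.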
Information integration across the senses is fundamental for effective interactions with our environment. The extent to which signals from different senses can interact in the absence of awareness is controversial. Combining the spatial ventriloquist illusion and dynamic continuous flash suppression (dCFS), we investigated in two experiments whether visual signals that observers do not consciously perceive can influence the spatial perception of sounds. Importantly, dCFS obliterated visual awareness on only a fraction of trials, allowing us to compare spatial ventriloquism for physically identical flashes that were judged as visible or invisible. Our results show a stronger ventriloquist effect for visible than for invisible flashes. Critically, a robust ventriloquist effect also emerged for invisible flashes, even when participants were at chance at locating the flash. Collectively, our findings demonstrate that signals we are not aware of in one sensory modality can alter the spatial perception of signals in another sensory modality.
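A minimal sketch of the ventriloquist measure, in Python, split by visibility as described above (the trial values are hypothetical and the analysis is a simplification, not the study's actual pipeline):

```python
import numpy as np

# Hypothetical trials: side of the flash relative to the sound (+1 right, -1 left),
# signed localization error of the reported sound position (deg), and the
# observer's visibility judgment for the flash under dCFS.
flash_side = np.array([+1, -1, +1, -1, +1, -1, +1, -1])
sound_error_deg = np.array([1.8, -1.5, 2.1, -0.9, 0.7, -0.6, 1.1, -0.4])
visible = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)

def ventriloquist_bias(err, side):
    """Mean localization shift in the direction of the flash (deg)."""
    return np.mean(err * side)

# Per the abstract: a larger bias for visible flashes, but a robust nonzero
# bias even when the flash was judged invisible.
print(f"visible:   {ventriloquist_bias(sound_error_deg[visible], flash_side[visible]):.2f} deg")
print(f"invisible: {ventriloquist_bias(sound_error_deg[~visible], flash_side[~visible]):.2f} deg")
```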