Vision is thought to support the development of spatial abilities in the other senses. If this is true, how does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory-localization abilities in 17 congenitally blind and 17 sighted individuals using a psychophysical minimum-audible-angle task that lacked sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, or frontal and rear spaces. We observed unequivocal enhancement of spatial-hearing abilities in congenitally blind people, irrespective of the field of space that was assessed. Our results conclusively demonstrate that visual experience is not a prerequisite for developing optimal spatial-hearing abilities and that, in striking contrast, the lack of vision leads to a general enhancement of auditory-spatial skills.
Humans seamlessly extract and integrate the emotional content delivered by the face and the voice of others. It is, however, poorly understood how perceptual decisions unfold in time when people discriminate the expression of emotions transmitted through dynamic facial and vocal signals, as in natural social contexts. In this study, we relied on a gating paradigm to track how the recognition of emotion expressions across the senses unfolds over exposure time. We first demonstrate that, across all emotions tested, a discriminatory decision is reached earlier with faces than with voices. Importantly, multisensory stimulation consistently reduced the accumulation of perceptual evidence needed to reach correct discrimination (isolation point). We also observed that expressions with different emotional content provide cumulative evidence at different speeds, with "fear" having the fastest isolation point across the senses. Finally, the lack of correlation between the confusion patterns in response to facial and vocal signals across time suggests distinct relations between the discriminative features extracted from the two signals. Altogether, these results provide a comprehensive view of how auditory, visual, and audiovisual information related to different emotion expressions accumulates over time, highlighting how a multisensory context can speed up the discrimination process when minimal information is available.