The perception of simultaneity between auditory and visual information is of crucial importance for maintaining a coordinated representation of a multisensory event. Here we show that the perceptual system is able to adaptively recalibrate itself to audio-visual temporal asynchronies. Participants were exposed to a train of sounds and light flashes with a constant time lag ranging from -200 (sound first) to +200 ms (light first). Following this exposure, a temporal order judgement (TOJ) task was performed in which a sound and light were presented with a stimulus onset asynchrony (SOA) chosen from 11 values between -240 and +240 ms. Participants either judged whether the sound or the light was presented first, or whether the sound and light were presented simultaneously or successively. The point of subjective simultaneity (PSS) was, in both cases, shifted in the direction of the exposure lag, indicative of recalibration.
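The PSS in a TOJ task is conventionally estimated by fitting a psychometric function to the proportion of "light first" responses across SOAs and reading off the 50% point. The sketch below illustrates this with a cumulative Gaussian; the SOA grid matches the 11 values between -240 and +240 ms described above, but the simulated observer (a hypothetical +30 ms shift) and all parameter values are illustrative, not data from the study.

```python
# Illustrative PSS estimation: fit a cumulative Gaussian to the
# proportion of "light first" responses as a function of SOA (ms).
# The 50% point of the fitted curve is the PSS.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """P("light first") as a cumulative Gaussian over SOA (ms)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# 11 SOAs between -240 and +240 ms, as in the TOJ task described above
soas = np.linspace(-240, 240, 11)

# Hypothetical observer whose PSS has shifted +30 ms after exposure
true_pss, true_sigma = 30.0, 80.0
p_light_first = psychometric(soas, true_pss, true_sigma)

# Recover the PSS by least-squares fitting of the psychometric function
(pss_hat, sigma_hat), _ = curve_fit(
    psychometric, soas, p_light_first, p0=(0.0, 60.0)
)
print(f"estimated PSS = {pss_hat:.1f} ms")
```

A shift of the fitted PSS in the direction of the exposure lag, relative to a no-exposure baseline, is the recalibration signature reported above.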
The kinds of aftereffects, indicative of cross-modal recalibration, that are observed after exposure to spatially incongruent inputs from different sensory modalities have so far not been demonstrated for identity incongruence. We show that exposure to incongruent audiovisual speech (producing the well-known McGurk effect) can recalibrate auditory speech identification. In Experiment 1, exposure to an ambiguous sound intermediate between /aba/ and /ada/, dubbed onto a video of a face articulating either /aba/ or /ada/, increased the proportion of /aba/ or /ada/ responses, respectively, in subsequent sound identification trials. Experiment 2 demonstrated either the same recalibration effect or its opposite (fewer /aba/ or /ada/ responses, indicative of selective speech adaptation), depending on whether the ambiguous sound or a congruent nonambiguous one was used during exposure. In separate forced-choice identification trials, the bimodal stimulus pairs producing these contrasting effects were categorized identically, which makes a role of postperceptual factors in the generation of the effects unlikely.