The brain adapts to asynchronous audiovisual signals by reducing the subjective temporal lag between them. However, it is currently unclear which sensory signal (visual or auditory) shifts toward the other. According to the idea that the auditory system codes temporal information more precisely than the visual system, one should expect a temporal shift of vision toward audition (as in the temporal ventriloquism effect) as a result of adaptation to asynchronous audiovisual signals. Given that visual information provides a more exact estimate of when a distal event occurred than auditory information (because the arrival time of visual information about an external event is always closer to the time at which the event occurred), the opposite result could also be expected. Here, we demonstrate that participants' speeded reaction times (RTs) to auditory (but, critically, not visual) stimuli are altered following adaptation to asynchronous audiovisual stimuli. After receiving "baseline" exposure to synchrony, participants were exposed either to auditory-lagging asynchrony (VA group) or to auditory-leading asynchrony (AV group). The results revealed that RTs to sounds became progressively faster (in the VA group) or slower (in the AV group) as participants' exposure to asynchrony increased, thus providing empirical evidence that speeded responses to sounds are influenced by exposure to audiovisual asynchrony.
Keywords: audition | perception | vision | time | recalibration
The role attention plays in our experience of a coherent, multisensory world is still controversial. On the one hand, a subset of inputs may be selected for detailed processing and multisensory integration in a top-down manner, i.e., guidance of multisensory integration by attention. On the other hand, stimuli may be integrated in a bottom-up fashion according to low-level properties such as spatial coincidence, thereby capturing attention. Moreover, attention itself is multifaceted and can be described via both top-down and bottom-up mechanisms. Thus, the interaction between attention and multisensory integration is complex and situation-dependent. The authors of this opinion paper are researchers who have contributed to this discussion from behavioural, computational and neurophysiological perspectives. We posed a series of questions, the goal of which was to illustrate the interplay between bottom-up and top-down processes in various multisensory scenarios, in order to clarify the standpoint taken by each author and with the hope of reaching a consensus. Although viewpoints diverge in the current responses, there is also considerable overlap: in general, it can be concluded that the amount of influence that attention exerts on multisensory integration depends on the current task as well as the prior knowledge and expectations of the observer. Moreover, stimulus properties such as reliability and salience also determine how open the processing is to influences of attention.
Multisensory information is often integrated in a statistically optimal fashion, with each sensory source weighted according to its precision. This integration scheme is statistically optimal because it theoretically yields unbiased perceptual estimates with the highest precision possible. There is currently no consensus about how the nervous system processes multiple sensory cues to elapsed time. To shed light on this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audiovisual stimuli. One assumption of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that, despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space: participants weight the auditory and visual information to arrive at the most precise single duration estimate possible. The work also disentangles how different integration strategies (i.e., considering the onset/offset times of the signals) might alter the final estimate. As such, we provide the first concrete evidence of an optimal integration strategy in human duration estimates.
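To make the weighting scheme concrete, here is a minimal sketch of the standard precision-weighted (maximum-likelihood) cue-combination rule that this abstract appeals to: each cue is weighted by its inverse variance, and the fused estimate is predicted to be at least as precise as the better unimodal cue. The function name and the numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuse_duration_estimates(d_a, sigma_a, d_v, sigma_v):
    """Precision-weighted (maximum-likelihood) fusion of two duration cues.

    d_a, d_v         : auditory and visual duration estimates (seconds)
    sigma_a, sigma_v : standard deviations (noise) of each unimodal estimate
    Returns the fused duration estimate and its predicted standard deviation.
    """
    # Each weight is the cue's inverse variance, normalized to sum to 1.
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    w_v = 1 - w_a
    d_av = w_a * d_a + w_v * d_v  # fused (bimodal) duration estimate
    # Predicted bimodal noise is never larger than the best unimodal cue.
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return d_av, sigma_av

# Hypothetical example: audition times the interval more precisely than
# vision, so the fused estimate is pulled toward the auditory duration.
d, s = fuse_duration_estimates(d_a=0.50, sigma_a=0.05, d_v=0.60, sigma_v=0.15)
print(f"fused duration = {d:.3f} s, predicted sigma = {s:.3f} s")
# -> fused duration = 0.510 s, predicted sigma = 0.047 s
```

Note that the predicted sigma (0.047 s) is below the better unimodal sigma (0.05 s); testing whether observers achieve this predicted precision gain is the standard empirical signature of optimal integration.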