2013
DOI: 10.1155/2013/475427
A Neural Network Model Can Explain Ventriloquism Aftereffect and Its Generalization across Sound Frequencies

Abstract: Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, enduring sound shift is observed in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report opposing results as to aftereffect generalization across sound frequencies varying from aftereffect being confined to the frequency used during adaptation to aftereff…

Cited by 10 publications (9 citation statements). References 43 publications.
“…While some find that this after-effect pertains only to the frequency of the tone of the trained audio-visual pair, suggesting frequency-dependent spatial representation, others find that it affects tones of other, distinct frequencies, suggesting frequency-invariant spatial representation (Recanzone, 1998; Lewald, 2002; Frissen et al., 2003, 2005; Woods and Recanzone, 2004). A recent computational model suggested that the test sound intensity may determine the amount of spectral generalization in this paradigm (Magosso et al., 2013).…”
Section: Discussion
confidence: 99%
“…In a series of recent studies (Magosso et al.; Cuppini et al.), we constructed a biologically plausible neural network, which incorporates two chains of unisensory neurons (one auditory and one visual) linked via cross-modal synapses. With this model, we were able to demonstrate that illusory phenomena crucially depend on the cross-modal synapse weights, which implement a prior on the co-occurrence of the stimuli.…”
Section: Introduction
confidence: 99%
“…Here, we propose a model that considers the interaction between cortical and subcortical structures (i.e., the Superior Colliculus) in mediating visual-auditory perceptual phenomena. The model represents an extension of our previous models (Magosso et al., 2008; Cuppini et al., 2012; Magosso et al., 2012; Magosso et al., 2013; Cuppini et al., 2014). Some main advancements can be highlighted.…”
Section: Discussion
confidence: 97%
“…In recent years, we proposed several neurocomputational models to investigate different aspects of visual-auditory integration (Magosso, Cuppini, Serino, Di Pellegrino and Ursino, 2008; Cuppini, Magosso, Rowland, Stein and Ursino, 2012; Magosso, Cuppini and Ursino, 2012; Magosso, Cona and Ursino, 2013; Cuppini, Magosso, Bolognini, Vallar and Ursino, 2014). In particular, some of those models were devoted to investigating the properties of single neurons in the SC, neglecting aspects of multisensory interaction in the cortex (Magosso et al., 2008; Cuppini et al., 2012); others focused only on visual-auditory integration in the cortex, not including subcortical structures (Magosso et al., 2012; Magosso et al., 2013; Cuppini et al., 2014). Moreover, none of them investigates the mechanisms underlying multisensory perceptual effects in brain-damaged patients (such as hemianopic patients).…”
Section: Introduction
confidence: 99%