2015
DOI: 10.1016/j.neunet.2015.01.004
Attention modeled as information in learning multisensory integration

Abstract: Top-down cognitive processes affect the way bottom-up cross-sensory stimuli are integrated. In this paper, we therefore extend a successful previous neural network model of learning multisensory integration in the superior colliculus (SC) by top-down, attentional input and train it on different classes of cross-modal stimuli. The network not only learns to integrate cross-modal stimuli, but the model also reproduces neurons specializing in different combinations of modalities as well as behavioral and neurophy…

Cited by 13 publications (10 citation statements)
References 57 publications
“…The reliance on visual rather than auditory input among older adults was due to the decreased sensitivity of their peripheral and central auditory system (Freigang et al, 2014 ). Recent studies further reported optimal binding of information from different modalities was not innate but rather learned gradually from experience (Bauer et al, 2015 ; Hecht and Gepperth, 2015 ). Greater reliance on visual rather than auditory stimuli was evident from the significantly higher accuracy rates in the unisensory visual condition compared to the auditory condition in the spatial discrimination task in the older group ( P = 0.005) but not in the younger group ( P = 0.834).…”
Section: Discussion
confidence: 99%
“…In our approach, such parameters are implicitly estimated from data using a generative learning algorithm (SOM). Then, in related modeling approaches on multisensory fusion [3], it is assumed that the distribution of the underlying "true" stimulus r, p(r), is uniform and unbounded: p(r) ∼ U(−∞, ∞), and that observations s_i for each sensor are obtained from r by adding Gaussian noise with a variance that is known for each sensor. Individual observations are therefore class-conditionally independent.…”
Section: Discussion
confidence: 99%
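Under the assumptions quoted above (flat prior over the true stimulus, class-conditionally independent Gaussian sensor noise with known variances), the optimal fused estimate is the inverse-variance-weighted average of the observations. A minimal sketch of this standard calculation (the function name and example values are illustrative, not from the cited model):

```python
import numpy as np

def fuse_gaussian(obs, sigmas):
    """ML fusion of independent noisy observations of one stimulus r.

    Assumes a flat prior p(r) ~ U(-inf, inf) and s_i = r + N(0, sigma_i^2),
    so the posterior over r is Gaussian with inverse-variance-weighted mean.
    """
    obs = np.asarray(obs, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # precision weights
    r_hat = np.sum(w * obs) / np.sum(w)              # fused estimate
    sigma_hat = np.sqrt(1.0 / np.sum(w))             # fused uncertainty
    return r_hat, sigma_hat

# A precise "visual" sensor and a noisier "auditory" sensor report a location:
r_hat, sigma_hat = fuse_gaussian([10.0, 14.0], [1.0, 2.0])
print(r_hat, sigma_hat)  # 10.8 0.894... — pulled toward the reliable sensor
```

The fused variance is always smaller than either individual sensor's variance, which is the usual argument for why integrating modalities pays off.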
“…In this article we present an argument for the latter case since it stands to reason that multisensory integration in biological systems is not generally innate but learned [3]. This ability seems to be gradually acquired in the course of development, and then refined and maintained throughout a life-span which would obviously be desirable for intelligent agents to reproduce.…”
Section: Introduction
confidence: 98%
“…This learning feature is realized by employing artificial intelligence, namely machine learning and vis-à-vis artificial neural networks. 145,166 The choice of ANNs to model so many different systems is, in part, due to their flexibility, adaptability and generalization capabilities and their easy application in software and hardware devices 138 and materials. 144 Proposal for application of ANNs and CNT/concrete composites in structural health monitoring…”
Section: Smart Materials
confidence: 99%