2013
DOI: 10.3389/fpsyg.2013.00528

An amodal shared resource model of language-mediated visual attention

Abstract: Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World…

Cited by 10 publications (21 citation statements). References 60 publications.

“…However, an alternative conception of the role of semantics is that it interacts with written and spoken forms via a central resource, as shown in Figure 1C (Dilkina, McClelland, & Plaut, 2008; Smith, Monaghan, & Huettig, 2013). Central resource models propose that semantics begins to be activated as quickly as the phonological form of a word and affects its visual identification (both for regular and exception words), echoing neuroimaging findings (e.g., Hauk et al., 2012).…”
Section: Computational Modelling
confidence: 92%
“…To represent such a system, we use the Multimodal Integration Model (MIM) of language processing, which integrates concurrent phonological, semantic and visual information in parallel during spoken word processing (Smith, Monaghan, & Huettig, 2013, 2014a, 2014b; see also Monaghan & Nazir, 2009). The model is derived from the Hub-and-Spoke framework (Dilkina et al., 2008, 2010; Plaut, 2002; Rogers et al., 2004), a single system architecture that consists of a central resource (hub) that integrates and translates information between multiple modality-specific sources (spokes).…”
Section: Models Of Multimodal Integration During Speech Processing
confidence: 99%
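
The hub-and-spoke arrangement described in that excerpt can be made concrete with a small sketch. The snippet below is only an illustration, not the published MIM/ASR implementation: the layer sizes, random weights, and single sigmoid pass are assumptions, but it shows the core idea of modality-specific phonological, visual, and semantic layers exchanging information only through a shared central resource.

```python
import numpy as np

# Minimal sketch of a hub-and-spoke style network: three modality-specific
# "spoke" layers (phonology, vision, semantics) all feed a single central
# "hub" layer, which in turn projects back out to each spoke. Layer sizes
# and the single forward pass are illustrative assumptions only.

rng = np.random.default_rng(0)

N_PHON, N_VIS, N_SEM, N_HUB = 60, 80, 100, 200   # assumed layer sizes

# Spoke-to-hub and hub-to-spoke weights (randomly initialised here;
# the actual model learns these from cross-modal mapping tasks).
W_in = {m: rng.normal(0, 0.1, (n, N_HUB))
        for m, n in [("phon", N_PHON), ("vis", N_VIS), ("sem", N_SEM)]}
W_out = {m: rng.normal(0, 0.1, (N_HUB, n))
         for m, n in [("phon", N_PHON), ("vis", N_VIS), ("sem", N_SEM)]}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hub_step(inputs):
    """One pass: sum all available modality inputs into the hub,
    then read out an activation pattern for every modality."""
    hub = sigmoid(sum(inputs[m] @ W_in[m] for m in inputs))
    return {m: sigmoid(hub @ W_out[m]) for m in W_out}

# Example: concurrent phonological and visual input, semantic readout.
phon = rng.random(N_PHON)      # unfolding spoken-word input
vis = rng.random(N_VIS)        # visual display input
out = hub_step({"phon": phon, "vis": vis})
print(out["sem"].shape)        # (100,) semantic activation pattern
```

The design point is that no spoke connects directly to another spoke; any influence of, say, phonology on visual attention must pass through the shared hub, which is what makes the resource "amodal".
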
“…It has since been demonstrated that alignment models are also capable of generating rhyme competitor effects if they are exposed to noise in the learning environment, such that onset information is not always a perfect predictor of the target word (Magnuson, Tanenhaus, & Aslin, 2000; Magnuson, Tanenhaus, Aslin, & Dahan, 2003; Smith et al., 2013). Evidence to support such predictions is provided by recent visual world data that demonstrates that onset and rhyme effects on language-mediated eye gaze can be modulated by the level of noise participants are exposed to in the speech signal (McQueen & Huettig, 2012).…”
Section: Models Of Multimodal Integration During Speech Processing
confidence: 99%
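
The noise manipulation that these alignment models rely on can likewise be sketched in a few lines. The one-hot encoding scheme, slot structure, and noise rate below are hypothetical choices for illustration; the point is simply that occasionally corrupting the onset slot during training makes word onsets imperfect predictors of the target.

```python
import numpy as np

# Illustrative sketch of a "noisy learning environment": with probability
# p_noise the onset phoneme slot of a training word is replaced by a random
# phoneme, so onsets no longer perfectly predict the target word. The
# phoneme inventory, slot count, and encoding are assumptions.

rng = np.random.default_rng(1)

N_PHONEMES, N_SLOTS = 20, 4           # assumed inventory size and word length

def encode(word_phonemes):
    """One-hot encode a word as a (slots x phonemes) matrix, flattened."""
    pattern = np.zeros((N_SLOTS, N_PHONEMES))
    pattern[np.arange(len(word_phonemes)), word_phonemes] = 1.0
    return pattern.reshape(-1)

def noisy_onset(word_phonemes, p_noise=0.2):
    """Return a copy of the word with its onset replaced at rate p_noise."""
    word = list(word_phonemes)
    if rng.random() < p_noise:
        word[0] = rng.integers(N_PHONEMES)
    return word

word = [3, 7, 1, 12]                  # arbitrary phoneme indices
train_input = encode(noisy_onset(word))
print(train_input.shape)              # (80,) flattened phonological input
```
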
“…The neural network model used within this paper is based on the ASR model of language-mediated eye gaze presented in Smith et al. (2013a). The same network architecture (see Fig.…”
Section: Architecture
confidence: 99%