2011 · DOI: 10.1016/j.actpsy.2010.09.008

On the temporal dynamics of language-mediated vision and vision-mediated language

Abstract: Recent converging evidence suggests that language and vision interact immediately in non-trivial ways, although the exact nature of this interaction is still unclear. Not only does linguistic information influence visual perception in real time, but visual information also influences language comprehension in real time. For example, in visual search tasks, incremental spoken delivery of the target features (e.g., "Is there a red vertical?") can increase the efficiency of conjunction search because only one feature…

Cited by 33 publications (21 citation statements) · References 98 publications

“…In general terms, it would appear then that a model in which "utterance meaning, scene information, and linguistic expectation are representationally indistinguishable and reside within a unitary system that learns, represents, and processes language and the world" would fail to explain our results (Altmann & Mirković, 2009, p. 593). They are, by contrast, compatible with models that postulate a rapid interaction between linguistic and non-linguistic information (e.g., Anderson et al., 2011; Tanenhaus et al., 1995).…”
Section: Implications for Models of Picture-Sentence Processing
confidence: 68%
“…In other words, what people currently have in mind can affect what they attend to later. Importantly, these memorial cues do not have to be spatial, as linguistic information currently held in working memory has also recently been shown to determine the spatial deployment of visual attention (e.g., Soto and Humphreys, 2007; Hodgson et al., 2009; Mannan et al., 2010; Anderson et al., 2011; Salverda and Altmann, 2011). Our data suggest that once the attentional cue is established in the speaker’s working memory, irrespective of whether it was established with the help of a pointer or a referent preview, this attentional cue biases the speaker to select the referent that later appears in the cued location as the sentential Subject.…”
Section: Discussion
confidence: 99%
“…Such multimodal interactions within the speaker and listener have been shown to be vital for language development (Markman, 1994; Bloom, 2000; Monaghan and Mattock, 2012; Mani et al., 2013) as well as for adult sentence and discourse processing (Anderson et al., 2011; Huettig et al., 2011b; Lupyan, 2012). Eye gaze has been used to demonstrate the nature of the processes supporting online integration of linguistic and visual information (Halberda, 2006; Huettig et al., 2011a).…”
Section: Integrative Processing in a Model of Language-Mediated Visual…
confidence: 99%