Published: 2013
DOI: 10.1037/a0028646

Parallel object activation and attentional gating of information: Evidence from eye movements in the multiple object naming paradigm.

Abstract: Do we access information from any object we can see, or do we only access information from objects that we intend to name? In three experiments using a modified multiple object naming paradigm, subjects were required to name several objects in succession when previews appeared briefly and simultaneously in the same location as the target as well as at another location. In Experiment 1, preview benefit—faster processing of the target when the preview was related (a mirror image of the target) compared to unrela…

Cited by 13 publications (20 citation statements)
References: 38 publications

“…This is a technique in which, during the saccade from one object to the next, the object on which the saccade would have landed (the interloper) is replaced by a different object (the target). It has been observed that gaze durations on the target were shorter when the target and the interloper were identical, or each other's mirror image, or associated with the same name than when target and interloper were unrelated (Morgan & Meyer, 2005; Schotter et al., 2013). This suggests that speakers processed the interloper prior to fixating on its location.…”
Section: Introduction (mentioning; confidence: 84%)
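
The display-change logic described in these statements reduces to a simple gaze-contingent loop: show the interloper at the upcoming location, monitor gaze, and swap in the target as soon as the saccade away from the currently fixated object is detected. The Python sketch below illustrates that loop; the tracker and display interfaces are hypothetical placeholders, not the API of any real eye-tracking toolkit.

```python
# Minimal sketch of a gaze-contingent display change ("interloper" replacement).
# The tracker/display objects and their methods are illustrative placeholders.

def in_region(gaze, region):
    """Return True if an (x, y) gaze sample falls inside a rectangular region."""
    x, y = gaze
    return region["x0"] <= x <= region["x1"] and region["y0"] <= y <= region["y1"]

def run_trial(tracker, display, first_region, second_region,
              interloper_img, target_img):
    """Show the interloper at the second object's location; once gaze leaves
    the first object (i.e., the saccade has launched), swap in the target.
    Saccadic suppression keeps the change itself from being perceived."""
    display.draw(second_region, interloper_img)  # extrafoveal preview
    display.flip()
    while True:
        gaze = tracker.get_sample()              # latest (x, y) gaze position
        if not in_region(gaze, first_region):    # saccade away from object 1
            display.draw(second_region, target_img)
            display.flip()                       # swap completes mid-saccade
            break
```

Gaze duration on the target can then be compared between related- and unrelated-interloper trials to estimate the preview benefit.
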
“…Due to this brief presentation, the distractor pictures might not have been processed up to the lexical level, thereby yielding facilitation rather than interference. This ties in with other studies finding facilitation (preview benefits) from visually related distractor pictures (e.g., Pollatsek, Rayner, & Collins, 1984; Schotter, Ferreira, & Rayner, 2013). In fact, it has been argued that the small facilitation effects stemming from visual-conceptual overlap can be overruled by the stronger interference effects stemming from lexical-semantic overlap (e.g., Abdel Rahman & Melinger, 2009; Aristei et al., 2012; Navarrete & Costa, 2005).…”
Section: Discussion (mentioning; confidence: 57%)
“…When they shifted their eye gaze to the second picture, this picture was replaced with a new picture. Facilitation was found when the old and new pictures were identical or homophonous, suggesting that the second (old) picture was processed while the participants were still looking at the first picture (see also Mädebach, Jescheniak, Oppermann, & Schriefers, 2011; Malpass & Meyer, 2010; Morgan, Van Elswijk, & Meyer, 2008; Schotter, Ferreira, & Rayner, 2013).…”
Section: Introduction (mentioning; confidence: 99%)
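
Across these paradigms, facilitation is typically quantified as a preview benefit: the reduction in mean gaze duration on the target when the interloper was related rather than unrelated. A minimal sketch of that arithmetic, with made-up trial records and field names, might look like this:

```python
# Illustrative preview-benefit computation; the data and field names are invented.
from statistics import mean

trials = [
    {"condition": "identical", "gaze_duration_ms": 512},
    {"condition": "identical", "gaze_duration_ms": 534},
    {"condition": "unrelated", "gaze_duration_ms": 601},
    {"condition": "unrelated", "gaze_duration_ms": 589},
]

def preview_benefit(trials, related="identical", unrelated="unrelated"):
    """Mean gaze duration on unrelated trials minus mean on related trials;
    a positive value indicates facilitation from the related preview."""
    rel = mean(t["gaze_duration_ms"] for t in trials if t["condition"] == related)
    unrel = mean(t["gaze_duration_ms"] for t in trials if t["condition"] == unrelated)
    return unrel - rel

print(f"Preview benefit: {preview_benefit(trials):.0f} ms")  # 72 ms for this toy data
```
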
“…When speaking about objects in our environment, planning ahead can be measured by a subject’s ability to pre-process upcoming, to-be-named objects before looking at them (Meyer & Dobel, 2003; Meyer, Ouellet, & Hacker, 2008; Morgan & Meyer, 2005; Morgan, Van Elswijk, & Meyer, 2008; Pollatsek, Rayner, & Collins, 1984; Schotter, Ferreira, & Rayner, 2013; for reviews see Meyer, 2004; Schotter, 2011). This ability for simultaneous processing raises the issue of how information from multiple objects is processed and managed.…”
(mentioning; confidence: 99%)