1983
DOI: 10.1037/0278-7393.9.3.411
Scene perception: A failure to find a benefit from prior expectancy or familiarity.

Abstract: In our everyday world, we typically have an expectancy as to the kinds of scenes that we will see from one glance to the next. Also, many of the scenes that we do see are familiar in the sense that they have been experienced before. Do these factors influence the perception of a scene? In three experiments, priming subjects with a verbal descriptor of a scene was not found to improve reliably the perception of that scene as assessed by the speed and accuracy of detecting an incongruity between an object and it…

Cited by 37 publications (36 citation statements) · References 25 publications
Citing publications: 1991–2019
“…However, claims about top-down effects have often taken a strong form, in terms of mechanisms such as knowledge-driven hypothesis testing. For example, one hypothesis was that exposure to a scene's name (the semantic level) can expedite visual recognition of objects within the scene (see, e.g., Biederman, Teitelbaum, & Mezzanotte, 1983; see also, e.g., Henderson, 1977; Rumelhart, 1977). Most experiments have not been consistent with top-down hypotheses (see, e.g., Biederman et al., 1983; Johnston, 1978); consequently, few current models include top-down mechanisms.…”
mentioning
confidence: 94%
“…However, the idea typically has been instantiated in terms of fairly strong mechanisms, such as top-down effects of prior knowledge and interactions between distinct processing levels (e.g., Henderson, 1977; Neisser, 1967; Rumelhart, 1977). For example, one hypothesis was that exposure to a scene's name, semantic-level identification, expedites visual-level identification of objects within the scene (e.g., Biederman, Teitelbaum, & Mezzanotte, 1983). Most of the data are inconsistent with strong top-down hypotheses (e.g., Biederman et al., 1983; Johnston, 1978), and few current models include strong top-down mechanisms.…”
Section: Contingency Hypothesis
mentioning
confidence: 99%
“…For example, one hypothesis was that exposure to a scene's name, semantic-level identification, expedites visual-level identification of objects within the scene (e.g., Biederman, Teitelbaum, & Mezzanotte, 1983). Most of the data are inconsistent with strong top-down hypotheses (e.g., Biederman et al., 1983; Johnston, 1978), and few current models include strong top-down mechanisms. However, this leaves a largely unexplored middle ground between the strictly bottom-up and strong top-down approaches.…”
Section: Contingency Hypothesis
mentioning
confidence: 99%
“…Hence, we do not expect perfect agreement between model-predicted salience and human eye position. In particular, our bottom-up model as used here does not yet account for, among other things, how the rapid identification of the gist (semantic category) of a scene may provide contextual priors to more efficiently guide attention towards target objects of interest (Biederman, Teitelbaum, & Mezzanotte, 1983; Friedman, 1979; Hollingworth & Henderson, 1998; Oliva & Schyns, 1997; Potter & Levy, 1969; Torralba, 2003); how search for a specific target might be guided top-down, for example by boosting visual neurons tuned to the attributes of the target (Ito & Gilbert, 1999; Moran & Desimone, 1985; Motter, 1994; Müller, Reimann, & Krummenacher, 2003; Reynolds, Pasternak, & Desimone, 2000; Treue & Maunsell, 1996; Treue & Trujillo, 1999; Wolfe, 1994, 1998; Wolfe, Cave, & Franzel, 1989; Yeshurun & Carrasco, 1998); or how task, expertise, and internal scene models may influence eye movements (Henderson & Hollingworth, 2003; Moreno, Reina, Luis, & Sabido, 2002; Nodine & Krupinski, 1998; Noton & Stark, 1971; Peebles & Cheng, 2003; Savelsbergh, Williams, van der Kamp, & Ward, 2002; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995; Yarbus, 1967). Nevertheless, our hypothesis for this study is that a more realistic simulation framework might yield better agreement between human and model than a less realistic one.…”
mentioning
confidence: 99%