2018
DOI: 10.1167/18.6.10
Meaning guides attention in real-world scene images: Evidence from eye movements and meaning maps

Abstract: We compared the influence of meaning and of salience on attentional guidance in scene images. Meaning was captured by “meaning maps” representing the spatial distribution of semantic information in scenes. Meaning maps were coded in a format that could be directly compared to maps of image salience generated from image features. We investigated the degree to which meaning versus image salience predicted human viewers' spatiotemporal distribution of attention over scenes. Extending previous work, here the distr…
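To make the map-comparison logic of the abstract concrete, here is a minimal sketch, assuming that meaning maps, saliency maps, and fixation density maps are 2D arrays on a common grid; the array names and synthetic data are illustrative assumptions, not the authors' materials or code.

```python
# Minimal sketch (not the authors' code): how well do a "meaning map" and a
# "saliency map" predict a fixation density map, when all three are 2D arrays
# on the same grid? Shapes and values below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

def map_correlation(predictor_map, fixation_density):
    """Pearson correlation between a predictor map and fixation density,
    computed over flattened pixel values."""
    r, _ = pearsonr(predictor_map.ravel(), fixation_density.ravel())
    return r

# Illustrative synthetic data standing in for real maps.
rng = np.random.default_rng(0)
meaning_map = rng.random((60, 80))
saliency_map = rng.random((60, 80))
fixation_density = 0.7 * meaning_map + 0.3 * rng.random((60, 80))

print("meaning  vs fixations:", map_correlation(meaning_map, fixation_density))
print("saliency vs fixations:", map_correlation(saliency_map, fixation_density))
```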

Cited by 98 publications (100 citation statements); References 66 publications.
“…Scene semantics is known to play an important role in guiding attention (Cerf et al., 2009; Henderson, 2003; Henderson & Hayes, 2017, 2018; Wu et al., 2014; Xu et al., 2014) and in forming scene memory (Isola et al., 2011). Consistently, we found that two proxies of scene semantics in this study, MOPS and the presence of face/human, were positively associated with scene memorability.…”
Section: Discussion (supporting)
confidence: 87%
“…Unfortunately, however, which scene features can contribute to the difference in fixation map consistency across scenes, especially in the first 2 s, is less well known. Our results suggest that such scene features could include highly meaningful features like faces and people, which can guide overt attention from the very first fixation (Henderson & Hayes, 2017, 2018). Understanding which scene features contribute to producing more consistent fixation maps early in viewing and how these features contribute to scene encoding will be critical for predicting both fixation patterns and scene memorability.…”
Section: Discussion (mentioning)
confidence: 83%
“…Given this relationship, the only way to unambiguously demonstrate an influence of salience over meaning is to de-correlate them. And as we have shown, when this is done statistically, there is little evidence for an influence of image salience independent of meaning both in the present task and in scene memorization and aesthetic judgement tasks 31,58.…”
Section: Discussion (mentioning)
confidence: 54%
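The de-correlation step described in that statement is typically handled with (semi)partial correlations. Below is a minimal sketch, not the authors' published analysis: it correlates fixation density with the part of the saliency map that is orthogonal to the meaning map. The variable names and synthetic arrays are illustrative assumptions.

```python
# Minimal sketch of a semipartial correlation: how much fixation density does
# the saliency map predict once its shared variance with the meaning map is
# regressed out? All inputs are treated as flattened 2D maps.
import numpy as np
from scipy.stats import pearsonr

def semipartial_correlation(criterion, predictor, control):
    """Correlate `criterion` with the residual of `predictor` after
    regressing out `control` (ordinary least squares on flattened maps)."""
    x = predictor.ravel()
    z = control.ravel()
    y = criterion.ravel()
    design = np.column_stack([np.ones_like(z), z])       # intercept + control
    coef, *_ = np.linalg.lstsq(design, x, rcond=None)    # regress predictor on control
    residual = x - design @ coef                          # unique part of the predictor
    r, _ = pearsonr(y, residual)
    return r

# Synthetic maps in which salience is partly correlated with meaning,
# as it tends to be in real scenes.
rng = np.random.default_rng(0)
meaning_map = rng.random((60, 80))
saliency_map = 0.5 * meaning_map + 0.5 * rng.random((60, 80))
fixation_density = 0.7 * meaning_map + 0.3 * rng.random((60, 80))

print("salience | meaning:", semipartial_correlation(fixation_density, saliency_map, meaning_map))
print("meaning | salience:", semipartial_correlation(fixation_density, meaning_map, saliency_map))
```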
“…However, the power of purely bottom-up saliency-based models to predict naturalistic viewing is limited, with other work suggesting that endogenous features like task instructions [e.g., "estimate the ages of the people in the painting", 2; see also 3], prior knowledge [e.g., an octopus does not belong in a barnyard scene, 4], and viewing biases [e.g., the tendency to view faces and text, 5; see also 6] can also be used to predict gaze allocation and to improve the performance of saliency-based models [6–8; for review, see 9]. The combined influence of these cognitive factors on viewing can be summed into "meaning maps", an analogue to saliency maps generated by crowd-sourcing ratings of "meaningfulness" (informativeness + recognizability) for each region of a scene [10]. When compared directly, meaning maps significantly outperform saliency maps in predicting eye movements during naturalistic scene viewing, suggesting that visual saliency alone is insufficient to model human gaze behavior.…”
Section: Eye Movements and Memory Encoding (mentioning)
confidence: 99%
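As a rough illustration of how crowd-sourced patch ratings could be turned into a continuous map of the kind described above, the sketch below averages ratings of overlapping circular patches back onto the image grid and smooths the result. The patch geometry, rating scale, and smoothing parameters are assumptions for illustration only, not the published meaning-map pipeline.

```python
# Minimal sketch (assumed pipeline, not the published one): build a crude
# "meaning map" by averaging per-patch meaningfulness ratings back onto the
# image grid and smoothing into a continuous map.
import numpy as np
from scipy.ndimage import gaussian_filter

def meaning_map_from_ratings(ratings, centers, radius, shape, sigma=10.0):
    """ratings: mean rating per patch; centers: (y, x) patch centers in pixels;
    radius: patch radius in pixels; shape: (H, W) of the scene image."""
    acc = np.zeros(shape, dtype=float)
    count = np.zeros(shape, dtype=float)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for rating, (cy, cx) in zip(ratings, centers):
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        acc[mask] += rating
        count[mask] += 1
    # Average overlapping patch ratings, leaving unrated pixels at zero.
    mean_map = np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)
    return gaussian_filter(mean_map, sigma)

# Tiny illustrative example: three rated patches on a 100x150 "scene".
ratings = [4.2, 1.5, 3.0]                  # e.g., a 1-6 meaningfulness scale
centers = [(30, 40), (70, 100), (50, 75)]  # hypothetical patch centers
mmap = meaning_map_from_ratings(ratings, centers, radius=20, shape=(100, 150))
print(mmap.shape, float(mmap.max()))
```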