2009
DOI: 10.1167/9.12.10

Faces and text attract gaze independent of the task: Experimental data and computer model

Abstract: Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? We here investigate the extent to which natural scenes containing faces, text elements, and cell phones (as a suitable control) attract attention by tracking the eye movements of subjects in two types of tasks: free viewing and search. We observed that subjects in free-viewing conditions look at faces and text…

Cited by 332 publications (321 citation statements)
References 33 publications
“…Our baseline behavioral data show that faces capture attention in the rhesus macaque in much the same way as they do in humans (47)(48)(49). Task-irrelevant faces interfere with task performance, and attentional capture is greater for emotional expressions such as fear and threat than for neutral faces.…”
Section: Discussion (mentioning)
confidence: 56%
“…There are two types of computational models for saliency depending on what the model is driven by: a bottom-up saliency using low-level features (e.g. contrast) […] that the latter attract the human gaze independently of the assigned task [CFK09].…”
Section: Human Perception (mentioning)
confidence: 99%
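
For readers unfamiliar with the bottom-up, low-level-feature models this statement contrasts with face and text attraction, the following is a minimal illustrative sketch in Python of a single contrast channel computed as a center-surround difference. The function name, the two Gaussian scales, and the use of a single channel are assumptions for illustration; this is not the model from the cited paper, which combines multiple feature channels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gray, center_sigma=1.0, surround_sigma=8.0):
    """One contrast channel of a bottom-up saliency model:
    absolute difference between a fine (center) and a coarse
    (surround) Gaussian blur of a grayscale image."""
    gray = np.asarray(gray, dtype=float)
    center = gaussian_filter(gray, center_sigma)
    surround = gaussian_filter(gray, surround_sigma)
    sal = np.abs(center - surround)
    # Normalize to [0, 1] so maps from different images are comparable.
    sal -= sal.min()
    if sal.max() > 0:
        sal /= sal.max()
    return sal

# Usage: given any 2-D grayscale array `img`, `saliency_map(img)`
# returns a map whose peaks mark high local contrast.
```

A full bottom-up model would compute several such channels (intensity, color opponency, orientation) over a multi-scale pyramid and combine them; the cited work's point is that faces and text draw gaze beyond what any such low-level combination predicts.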
“…Future research can also draw on animal models and human neurogenetics to elucidate the role of serotonin, as well as other neuromodulators, such as dopamine and norepinephrine, in affect-biased attention. In addition, whereas a number of computational models of bottom-up visual salience predict attention deployment based on low-level visual features (e.g., [21]) and even incorporate aspects of semantic meaning [66], future models can incorporate affective salience parameters to predict attention deployment.…”
Section: Future Directions (mentioning)
confidence: 99%
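
The proposal in this statement, folding an affective-salience parameter into a saliency model, could take the form of a weighted map combination. The sketch below is an assumption about what such a combination might look like (the linear weighting, weights, and function name are mine, not a published model):

```python
import numpy as np

def combined_priority(bottom_up, affect, w_bu=0.7, w_affect=0.3):
    """Linear mix of a low-level saliency map and an affective-salience
    map, both assumed to be same-shaped arrays normalized to [0, 1]."""
    priority = w_bu * bottom_up + w_affect * affect
    peak = priority.max()
    # Renormalize so the combined map stays in [0, 1].
    return priority / peak if peak > 0 else priority
```

The weights here stand in for the "affective salience parameters" the authors suggest future models could fit to behavioral data.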