2017
DOI: 10.1109/tip.2017.2722238
Visual Attention Saccadic Models Learn to Emulate Gaze Patterns From Childhood to Adulthood

Abstract: How people look at visual information reveals fundamental information about themselves, their interests, and their state of mind. While previous visual attention models output static 2D saliency maps, saccadic models aim to predict not only where observers look but also how they move their eyes to explore the scene. In this paper, we demonstrate that saccadic models are a flexible framework that can be tailored to emulate observers' viewing tendencies. More specifically, we use fixation data from 101 observe…

Cited by 41 publications (21 citation statements)
References 66 publications
“…For instance, such a model can predict the probability of a sequence (here, a scanpath) belonging to a certain experimental group (e.g., Voisin, Yoon, Tourassi, Morin-Ducote, & Hudson, 2013). It can be used to generate scanpaths (e.g., saccadic models: Le Meur & Coutrot, 2016a; Le Meur et al., 2017a), and it can also prove useful in studying bottom-up and top-down visual attention processes (Rai, Le Callet, & Cheung, 2016; Coutrot et al., 2018). In this section we take on the task of classifying the types (three classes; Model Type 1), types and sizes (nine classes; Type 2), and sizes of scotomas (four classes;…”
Section: Classifier Models
confidence: 99%
“…Depending on the task observers have to perform, gaze deployment is significantly altered. Beyond the task at hand, top-down influences are also related to observers' experience as well as their own characteristics, such as age [4,5] and their cultural experiences [6].…”
Section: Introduction
confidence: 99%
“…New methods and approaches are required to detect anomalies in UAV footage and to ease decision-making. Among them, we believe that computational models of visual attention could be used to simulate operators' behavior [25]. Eventually, thanks to these predictions, operators' workloads can be reduced by eliminating unnecessary footage segments.…”
Section: Related Work
confidence: 99%