2007
DOI: 10.1016/j.visres.2007.06.015

Predicting visual fixations on video based on low-level visual features

Abstract: To what extent can a computational model of bottom-up visual attention predict what an observer is looking at? What is the contribution of low-level visual features to attention deployment? To answer these questions, a new spatio-temporal computational model is proposed. This model incorporates several visual features; a fusion algorithm is therefore required to combine the different saliency maps (achromatic, chromatic and temporal). To quantitatively assess the model's performance, eye movements …
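The fusion step named in the abstract can be illustrated with a minimal sketch. The paper's actual fusion algorithm is not reproduced here; this example simply peak-normalizes each feature map before a weighted sum, and the function name and uniform default weights are assumptions for illustration:

```python
import numpy as np

def fuse_saliency_maps(achromatic, chromatic, temporal, weights=(1.0, 1.0, 1.0)):
    """Combine per-feature saliency maps into a single master map.

    Each map is peak-normalized to [0, 1] before the weighted sum, so no
    feature dominates purely because of its dynamic range.
    """
    maps = [np.asarray(m, dtype=float) for m in (achromatic, chromatic, temporal)]
    fused = np.zeros_like(maps[0])
    for w, m in zip(weights, maps):
        peak = m.max()
        if peak > 0:
            m = m / peak              # normalize to [0, 1]
        fused += w * m
    peak = fused.max()
    return fused / peak if peak > 0 else fused
```

A real fusion scheme would typically also account for inter-map competition or map reliability; the uniform weighting above is only a placeholder.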

Cited by 282 publications (212 citation statements). References 32 publications.
“…Our goal is now to extract the local motion in video frames, i.e., the residual motion with respect to model (4). We denote the macro-block optical flow motion vector V c (I, i).…”
Section: Temporal Saliency Map
confidence: 99%
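The residual motion described in this excerpt, local flow minus the prediction of a global (camera) motion model, can be sketched as follows. The paper's "model (4)" is not reproduced on this page, so a standard 2D affine parameterization is assumed here:

```python
import numpy as np

def residual_motion(block_flow, block_centers, affine_params):
    """Residual (local) motion of each macro-block.

    block_flow:    (N, 2) observed optical-flow vectors, one per macro-block.
    block_centers: (N, 2) (x, y) centers of the macro-blocks.
    affine_params: (a1..a6) of an assumed global 2D affine model
                   Vx = a1 + a2*x + a3*y,  Vy = a4 + a5*x + a6*y.
    """
    a1, a2, a3, a4, a5, a6 = affine_params
    x, y = block_centers[:, 0], block_centers[:, 1]
    global_flow = np.stack([a1 + a2 * x + a3 * y,
                            a4 + a5 * x + a6 * y], axis=1)
    return block_flow - global_flow   # what remains once camera motion is removed
```

Blocks whose residual is large relative to their neighbors are candidates for independently moving objects, which is what a temporal saliency map highlights.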
“…Numerous psycho-visual studies conducted since the last quarter of the 20th century have uncovered factors influencing it. Considering only signal features, sensitivity to color contrasts, contours, orientation and motion observed in the image plane has been reported by numerous authors [3,4]. Nevertheless, these features alone are not sufficient to delimit the area of the image plane that is the strongest gaze attractor.…”
Section: Introduction
confidence: 99%
“…Complex saliency models like the ones by Itti et al [3] or le Meur [9] are close to the perception of the human visual system. In some cases, however, they may be too general, because they rely on pure bottom-up information.…”
Section: Still Image Saliency Module
confidence: 83%
“…The applied visual attention model [9] considers color contrast, visual masking effects and orientation features. Furthermore, hierarchical block matching, a 2D affine motion model and M-estimator regression are used to determine temporal saliency.…”
confidence: 99%
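The robust estimation step this excerpt mentions, fitting a 2D affine motion model with an M-estimator, is commonly implemented as iteratively reweighted least squares (IRLS). A minimal sketch under that assumption; the Huber weight function and MAD scale estimate are standard choices, not details taken from the paper:

```python
import numpy as np

def fit_affine_irls(centers, flow, n_iter=20, c=1.345):
    """Robustly fit Vx = a1 + a2*x + a3*y, Vy = a4 + a5*x + a6*y.

    Iteratively reweighted least squares with Huber weights, so macro-blocks
    carrying independent object motion are down-weighted as outliers.
    """
    x, y = centers[:, 0], centers[:, 1]
    A = np.stack([np.ones_like(x), x, y], axis=1)      # (N, 3) design matrix
    params = []
    for comp in range(2):                              # Vx, then Vy
        b = flow[:, comp]
        w = np.ones_like(b)
        for _ in range(n_iter):
            sw = np.sqrt(w)                            # weighted least squares
            theta, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
            r = b - A @ theta
            s = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust scale (MAD)
            u = np.abs(r) / s
            w = np.minimum(1.0, c / np.maximum(u, 1e-12))  # Huber weights
        params.append(theta)
    return np.concatenate(params)                      # (a1..a6)
```

With exact inlier flow plus a minority of outlier blocks, the weights of the outliers shrink toward zero over the iterations and the affine parameters converge to the dominant (camera) motion.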
“…An early work is [25], in which Itti et al. derived a saliency map by computing contrasts of the image's pixel intensity, orientation and color. Many improved algorithms then emerged, such as the method based on natural statistics in [26], the algorithm mainly utilizing low-level features in [27], and the approach considering the spectral residual in [28]. Some VQA methods have also viewed saliency as a powerful tool.…”
Section: A Saliency Map
confidence: 99%
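The center-surround contrast principle behind the Itti-style model cited in this excerpt can be illustrated at a single scale. The original model uses multi-scale Gaussian pyramids over intensity, color and orientation channels; this sketch approximates only the intensity channel, with box filters standing in for the Gaussian scales:

```python
import numpy as np

def box_blur(img, radius):
    """Mean filter via separable convolution with a box kernel (edge-padded)."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    padded = np.pad(img, radius, mode='edge')
    out = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode='valid'), 1, padded)
    out = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode='valid'), 0, out)
    return out

def center_surround(intensity, r_center=1, r_surround=4):
    """Single-scale center-surround contrast: |fine scale - coarse scale|.

    Responds strongly where a local region differs from its surround,
    e.g. at the boundary of a bright patch on a dark background.
    """
    center = box_blur(intensity, r_center)
    surround = box_blur(intensity, r_surround)
    return np.abs(center - surround)
```

Uniform regions yield zero contrast, while edges and isolated patches produce high values, which is the basic mechanism a bottom-up saliency map exploits.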