2017
DOI: 10.1007/s11042-016-4294-1
Video attention prediction using gaze saliency

Cited by 6 publications (2 citation statements)
References 37 publications
“…Bottom-up approaches employ low-level image features, such as intensity, color, and orientation, to predict visual attention, while top-down approaches take into account high-level features of the scene (such as object/human relationships) and the context. Hybrid approaches combine low-level and high-level features to improve performance, allowing recent applications to predict video attention during natural interactions of the user with a smartphone [86]. With advances in deep learning, new saliency-based gaze tracking models are continuously proposed, but they are mostly exploited to learn salient regions in order to predict eye fixations and thereby eliminate explicit personal calibration when using PCCR-based eye trackers [87].…”
Section: Gaze Tracking By Scene Analysis
confidence: 99%
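The bottom-up pipeline described above can be illustrated with a toy center-surround contrast map on the intensity channel, in the spirit of classic Itti-style models. This is a minimal sketch, not the method of the cited paper; the box-filter blur and the window sizes `k_center` and `k_surround` are illustrative choices.

```python
import numpy as np

def box_blur(x, k):
    """Box blur with window (2*k+1)x(2*k+1), edge-padded borders."""
    p = np.pad(x, k, mode="edge")
    out = np.zeros_like(x)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += p[k + dy : k + dy + x.shape[0],
                     k + dx : k + dx + x.shape[1]]
    return out / (2 * k + 1) ** 2

def bottom_up_saliency(intensity, k_center=1, k_surround=8):
    """Toy bottom-up saliency: |fine blur - coarse blur| of the
    intensity channel, normalized to [0, 1]."""
    center = box_blur(intensity, k_center)
    surround = box_blur(intensity, k_surround)
    sal = np.abs(center - surround)
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal

# Usage: a bright square on a gray background should attract saliency
# around its high-contrast region, while uniform areas stay near zero.
img = np.full((64, 64, 3), 0.5)
img[24:40, 24:40] = 1.0
sal = bottom_up_saliency(img.mean(axis=2))
peak = np.unravel_index(sal.argmax(), sal.shape)
```

Real bottom-up models additionally pool color-opponency and orientation channels across several scales; the single intensity channel here only conveys the center-surround idea.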
“…Saliency prediction can be grouped into three general categories: bottom-up approaches based on low-level features such as color, contrast, and texture [7,18,30,45-47]; top-down approaches based on high-level image features such as object knowledge [13,16,38]; and combinations of the two [6,44]. Developments in deep learning boost the performance of saliency prediction, with saliency datasets used for training and benchmarking (SALICON [17], MIT300 [4], etc.)…”
Section: DNNs For Saliency Prediction
confidence: 99%
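Benchmarks such as MIT300 and SALICON score predicted maps against recorded fixations; one standard metric is Normalized Scanpath Saliency (NSS), the mean z-scored saliency value at ground-truth fixation locations. A minimal sketch (the toy map and fixation coordinates below are made up for illustration):

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the map, then average
    its values at the ground-truth fixation points (row, col)."""
    s = (saliency - saliency.mean()) / saliency.std()
    rows, cols = np.asarray(fixations).T
    return s[rows, cols].mean()

# Usage: a map that peaks where observers actually looked scores
# above chance (positive NSS); fixations on background score below.
sal = np.zeros((10, 10))
sal[4:6, 4:6] = 1.0
score_on = nss(sal, [(4, 4), (5, 5)])   # fixations inside the peak
score_off = nss(sal, [(0, 0), (9, 9)])  # fixations on the background
```

Because NSS is an average of z-scores, a chance-level prediction scores near 0, which makes the metric comparable across maps with different value ranges.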