Proceedings of the Symposium on Eye Tracking Research and Applications 2012
DOI: 10.1145/2168556.2168629

Incorporating visual field characteristics into a saliency map

Abstract: Characteristics of the human visual field are well known to differ between the central (foveal) and peripheral areas. Existing computational models of visual saliency, however, do not take this biological evidence into account. They compute visual saliency uniformly over the retina and thus have difficulty in accurately predicting the next gaze (fixation) point. This paper proposes to incorporate human visual field characteristics into visual saliency, and presents a computational model for produc…
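
As a rough illustration of the idea (not the authors' implementation), the sketch below attenuates a conventional, uniformly computed saliency map with retinal eccentricity. The Gaussian falloff and its sigma parameter are assumptions standing in for the visual field sensitivity profile the paper models:

import numpy as np

def foveated_saliency(saliency, fixation, sigma=64.0):
    # `saliency` is an (H, W) array; `fixation` is an (x, y) pixel position.
    # The Gaussian falloff `sigma` (in pixels) is an assumed stand-in for
    # the fovea-versus-periphery sensitivity profile, not the paper's fit.
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Squared eccentricity of each pixel relative to the fixation point.
    d2 = (xs - fixation[0]) ** 2 + (ys - fixation[1]) ** 2
    # Foveal pixels keep their saliency; peripheral pixels are attenuated.
    return saliency * np.exp(-d2 / (2.0 * sigma ** 2))

# Usage: predict the next fixation as the maximum of the reweighted map.
smap = np.random.rand(480, 640)                 # placeholder saliency map
next_y, next_x = np.unravel_index(
    np.argmax(foveated_saliency(smap, fixation=(320, 240))), smap.shape)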

Cited by 5 publications (5 citation statements)
References 17 publications

“…The temporal evolution of fixation and saccade behavior reveals distinct patterns of eye movements over viewing time, confirming the evidence that visual attention is an active process and that modeling its temporality requires further investigation. Scanpath prediction could allow the reproduction of the aforementioned effects, regarding in that aspect both bottom-up and top-down processing of the visual features that distinctively guide visual attention [Boccignone and Ferraro, 2004] [Kubota et al., 2012] [Chang et al., 2014] [LeMeur and Liu, 2015] [Aboudib et al., 2015] [Adeli et al., 2016] [Wang et al., 2016] [Wloka et al., 2017] [White et al., 2017]. Accordingly, as saliency decreases over time, saliency evaluation measures should take this into account.…”
Section: Discussion (mentioning)
confidence: 99%
“…Recently, these models have been used to boost computer vision and pattern recognition techniques such as object detection [113], [124], [125], object recognition [126]-[131], action recognition [132], [133], segmentation [37], [114], [115], [134], [135], and background subtraction [136]. In addition, specific applications include video summarization [137] and compression [138], scene understanding [139]-[141], computer-human interaction [98], [142]-[147], robotics [132], [148]-[150], and driver assistance [151], [152]. The potential of visual attention models capable of extracting important regions promises contributions to many other domains.…”
Section: Discussion (mentioning)
confidence: 99%
“…Using the dataset described above, visual saliency models are learned according to [Kubota et al. 2012]. Unlike other learning-based models, their model takes into account the non-uniformity of sensitivity within the field of view by using different weights depending on the distance from the current fixation position.…”
Section: Saliency Model (mentioning)
confidence: 99%
“…Zhao et al. [2011] proposed a model trained with features used in Itti et al.'s model [1998] by the least squares method. Kubota et al. [2012] improved the learning-based saliency model by incorporating characteristics of the human visual field.…”
Section: Introduction (mentioning)
confidence: 99%
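
The least-squares training mentioned in this last statement can be sketched in a few lines. This is a hedged illustration under assumed inputs (flattened feature maps X of shape (n_pixels, n_features), binary fixation labels y, and per-pixel eccentricities ecc); the eccentricity binning is a simplification of the per-visual-field weighting described above, not Kubota et al.'s actual formulation:

import numpy as np

def fit_weights(X, y):
    # Ordinary least squares, as attributed to Zhao et al. [2011]:
    # w = argmin_w ||X w - y||^2 over feature maps X and fixation labels y
    # (1 = fixated pixel, 0 = not fixated).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def fit_weights_by_eccentricity(X, y, ecc, bin_edges):
    # Assumed simplification: learn a separate weight vector per
    # eccentricity bin, so that features contribute differently in
    # central (foveal) and peripheral vision.
    idx = np.digitize(ecc, bin_edges)
    return {b: fit_weights(X[idx == b], y[idx == b]) for b in np.unique(idx)}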