2013 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2013.150
Saliency Aggregation: A Data-Driven Approach

Cited by 135 publications (80 citation statements)
References 36 publications
“…Consequently, the room for improvement is still substantial and can be captured, to some degree, by aggregating different results. However, we draw attention to a crucial difference between our work and the two aforementioned studies [11,10]. The saliency maps that are aggregated in this study are computed using computational models of visual attention for eye fixation prediction.…”
Section: Introduction (mentioning)
confidence: 99%
“…Borji et al [11] combined the results of several models and found that the simple average method performs well. Mai et al [10] combined the results of models detecting objects of interest in simple images (mainly composed of one object of interest with a simple background). They used simple methods as well as trained methods.…”
Section: Introduction (mentioning)
confidence: 99%
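The simple average aggregation that the excerpt attributes to Borji et al can be sketched as a pixel-wise mean over the maps produced by different models. This is a minimal illustration, not the authors' implementation; the toy 2×2 maps below are invented inputs.

```python
import numpy as np

def aggregate_saliency(maps):
    """Aggregate per-model saliency maps by pixel-wise averaging.

    maps: list of 2-D arrays of identical shape, each a saliency map
    in [0, 1] produced by a different model (hypothetical inputs).
    """
    stacked = np.stack(maps, axis=0)  # shape: (n_models, H, W)
    return stacked.mean(axis=0)       # pixel-wise mean across models

# Example: three toy 2x2 "saliency maps" from different models.
m1 = np.array([[0.2, 0.8], [0.4, 0.6]])
m2 = np.array([[0.4, 0.6], [0.2, 0.8]])
m3 = np.array([[0.6, 0.4], [0.6, 0.4]])
combined = aggregate_saliency([m1, m2, m3])
# → [[0.4, 0.6], [0.4, 0.6]]
```

Despite its simplicity, the average acts as an ensemble: errors that are uncorrelated across models tend to cancel, which is one reason the excerpt reports it performing well.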
“…Second, to overcome the propensity of bottom-up saliency to respond more to edges than to (homogeneous) object interiors, they include some form of spatial propagation of saliency information. This can be implemented in various ways, including conditional random fields (CRFs) [30,31], random walk models [19], energy models [10,19], or diffusion processes [38]. Third, beyond the classical measures of bottom-up saliency, these models may also account for objectness features.…”
Section: Introduction (mentioning)
confidence: 99%
“…Top-down approaches [7]–[10] are goal-directed and usually adopt supervised learning for a specific class. Most saliency detection methods are based on bottom-up visual attention mechanisms [11]–[15], [17], [18], [21], which are independent of knowledge of the image content and utilize various low-level features, such as intensity, color, and orientation. These bottom-up saliency models are generally based on different mathematical formulations of center-surround contrast, or treat the image boundary as background.…”
Section: Introduction (mentioning)
confidence: 99%
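The center-surround contrast that the excerpt mentions can be illustrated with a toy formulation: a pixel is salient when the mean intensity of a small window around it differs from the mean of a larger surrounding window. The window radii and the absolute-difference measure here are illustrative choices, not taken from any specific model in the cited works.

```python
import numpy as np

def center_surround_contrast(image, center=1, surround=3):
    """Toy center-surround saliency: for each pixel, the absolute
    difference between the mean intensity of a small center window
    (radius `center`) and a larger surround window (radius `surround`).
    Windows are clipped at the image border.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            def window_mean(r):
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                return image[y0:y1, x0:x1].mean()
            out[y, x] = abs(window_mean(center) - window_mean(surround))
    return out

# A single bright pixel on a dark background: the contrast response
# peaks at the bright pixel, matching the intuition that locally
# distinctive regions are salient.
img = np.zeros((7, 7))
img[3, 3] = 1.0
sal = center_surround_contrast(img)
```

A real model would typically compute this over multiple feature channels (intensity, color, orientation) and scales, but the core center-versus-surround comparison is the same.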