2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298710

SALICON: Saliency in Context

Cited by 595 publications (568 citation statements)
References 27 publications

“…This provided a more complete set of annotations than the regions extracted for the MTurk labeling tasks, where only the top 1-3 most highly-fixated regions per image were labeled. In this section we compute the importance of faces in an image following the approach of Jiang et al [18]: given a bounding box for an object in an image, the maximum saliency value falling within the object's outline is taken as the object's importance score (the maximum is a good choice for such analyses as it does not scale with object size).…”
Section: The Importance Of People (mentioning)
confidence: 99%
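The importance score described in the excerpt above reduces to a few lines of code. Below is a minimal sketch, assuming a normalized 2D saliency map stored as a NumPy array and an axis-aligned bounding box; the function name object_importance and the (x_min, y_min, x_max, y_max) box format are illustrative choices, not taken from the cited work.

import numpy as np

def object_importance(saliency_map: np.ndarray, bbox: tuple) -> float:
    """Score an object by the maximum saliency value inside its bounding box.

    bbox is (x_min, y_min, x_max, y_max) in pixel coordinates. The maximum is
    used (rather than a sum or mean) so the score does not grow with object size,
    as noted in the excerpt above.
    """
    x_min, y_min, x_max, y_max = bbox
    region = saliency_map[y_min:y_max, x_min:x_max]
    return float(region.max())

# Toy example: a 4x4 saliency map whose most salient pixel (0.9) lies
# inside the lower-right 2x2 box.
saliency = np.array([
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.5, 0.3, 0.1],
    [0.1, 0.4, 0.9, 0.6],
    [0.0, 0.2, 0.7, 0.8],
])
print(object_importance(saliency, (2, 2, 4, 4)))  # -> 0.9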
“…DeepFix [19] and SALICON [18], both neural network models, hold the top 2 spots. The CAT2000 dataset, a recent addition to the MIT benchmark, has 19 models evaluated to date.…”
Section: Evaluating Progress (mentioning)
confidence: 99%
“…Note that the red boxes, which do not correspond to objects, let alone salient ones, all have higher scores than the green box, which does denote a salient object. Right: the saliency map output by the saliency detection method of Jiang et al (2015), currently the highest ranking method on the MIT saliency benchmark (Bylinskii et al 2012). Note that the cooler is not highlighted as salient.…”
Section: Fig (mentioning)
confidence: 99%