2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016
DOI: 10.1109/cvpr.2016.64

GraB: Visual Saliency via Novel Graph Model and Background Priors

Cited by 90 publications (51 citation statements) · References 28 publications
“…Li [22] used a visual saliency model and discovered that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. Recently, Wang [23] proposed an unsupervised bottom-up saliency detection model by exploiting a novel graph structure and background priors.…”
Section: Visual Saliency Model
confidence: 99%
“…Instead of predicting a few fixation points in an image, new saliency detection methods uniformly highlight the entire salient region in the foreground (Achanta and Hemami, 2009; Cheng and Mitra, 2015; Gong and Tao, 2015; Wang et al., 2016; Chakraborty and Mitra, 2016; Liu and Han, 2016; Kim et al., 2016; Wei and Wen, 2012). For example, Achanta and Hemami (2009) first presented a frequency-tuned salient region detection method that produced full-resolution saliency maps with well-defined boundaries of salient objects by substantially retaining more spatial-frequency content from the original image.…”
Section: Related Work
confidence: 99%
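The frequency-tuned idea quoted above can be sketched in a few lines: saliency at each pixel is the distance between the image's mean feature vector and a slightly blurred version of the image, which keeps most spatial frequencies and yields a full-resolution map. This is a minimal NumPy/SciPy sketch, not the authors' implementation (the original operates in Lab color space; the `sigma` value here is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_tuned_saliency(img, sigma=1.0):
    """Frequency-tuned saliency sketch (after Achanta and Hemami, 2009).

    img: float array of shape (H, W, C), ideally in Lab color space.
    Returns an (H, W) saliency map: the Euclidean distance between the
    mean image feature vector and a Gaussian-blurred image at each pixel.
    """
    img = np.asarray(img, dtype=np.float64)
    mean_vec = img.reshape(-1, img.shape[-1]).mean(axis=0)  # I_mu
    # Mild blur removes high-frequency noise while retaining most
    # spatial-frequency content, so object boundaries stay sharp.
    blurred = np.stack(
        [gaussian_filter(img[..., c], sigma=sigma) for c in range(img.shape[-1])],
        axis=-1,
    )
    return np.linalg.norm(blurred - mean_vec, axis=-1)
```

On a uniform image the map is zero everywhere; on a natural image, regions far from the global mean color score highest, which is why the method highlights whole salient regions rather than a few fixation points.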
“…The CNN-based models have achieved better performance than the handcrafted saliency models in a variety of challenging cases; however, a sufficient training dataset, a high-quality GPU, and considerable time are required for the learning part, and failure-cause analysis is very difficult. 19 Most saliency approaches 12,15,16,17,18,10 were designed to employ the contrast value as a main feature. The contrast-based saliency models consist of the following two types: global- and local-contrast-based models.…”
Section: Related Work
confidence: 99%
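The global-contrast idea referred to above can be illustrated with a toy grayscale version: a pixel's saliency is its mean intensity difference from every other pixel, computed efficiently over a histogram of quantized levels. This is a simplified sketch assuming 8-bit intensities, not any specific model from the cited works (real global-contrast methods operate on quantized colors, often with spatial weighting):

```python
import numpy as np

def global_contrast_saliency(gray):
    """Toy global-contrast saliency for a grayscale image.

    Each pixel's saliency is its mean absolute intensity difference from
    all pixels in the image. Working per quantized level via a histogram
    makes this O(H*W + 256^2) instead of O((H*W)^2).
    """
    levels = np.round(np.asarray(gray)).astype(int).clip(0, 255)
    hist = np.bincount(levels.ravel(), minlength=256)
    # Pairwise distance between the 256 intensity levels.
    dist = np.abs(np.arange(256)[:, None] - np.arange(256)[None, :])
    # Mean distance from each level to all pixels in the image.
    sal_per_level = (dist * hist[None, :]).sum(axis=1) / hist.sum()
    return sal_per_level[levels].astype(np.float64)
```

A uniform image gets zero saliency everywhere, while rare intensities far from the dominant ones score highest, which is the "uniqueness and rarity" intuition behind contrast-based models.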
“…Since the estimated saliency is a higher-level feature map, the model can be used for various image-processing and pattern-recognition applications, such as visual tracking, 2 object segmentation, 3,4 object recognition, 5,6 image matching, 7 and image/video compression. 8,9,10,11 Although the study of saliency region detection is quite extensive and diverse, a common feature among most existing studies 12,13,14,15,16,17 is that the models have been dependent on the contrast feature. Because the contrast feature reflects the human visual system, which automatically concentrates on uniqueness and rarity, 1 it has been widely used for the detection of the salient region.…”
Section: Introduction
confidence: 99%