2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298603

Traditional saliency reloaded: A good old model in new shape

Abstract: In this paper, we show that the seminal, biologically-inspired saliency model by Itti et al. [21] is still competitive with current state-of-the-art methods for salient object segmentation if some important adaptations are made. We show which changes are necessary to achieve high performance, with special emphasis on the scale-space: we introduce a twin pyramid for computing Difference-of-Gaussians, which enables a flexible center-surround ratio. The resulting system, called VOCUS2, is elegant and coherent in st…
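As a rough illustration of the twin-pyramid idea from the abstract, the sketch below builds two Gaussian pyramids with different smoothing from the same channel and subtracts them layer by layer. The function name, parameters, and default values are illustrative assumptions, not VOCUS2's actual settings.

```python
import cv2
import numpy as np

def twin_pyramid_dog(channel, n_octaves=5, sigma_center=1.0, cs_ratio=5.0):
    # Build a 'center' and a 'surround' Gaussian pyramid from the same
    # channel; the surround sigma is cs_ratio times the center sigma,
    # so the center-surround ratio can be chosen freely instead of
    # being fixed by the pyramid's octave spacing.
    center = channel.astype(np.float32)
    surround = channel.astype(np.float32)
    contrasts = []
    for _ in range(n_octaves):
        c = cv2.GaussianBlur(center, (0, 0), sigma_center)
        s = cv2.GaussianBlur(surround, (0, 0), sigma_center * cs_ratio)
        # DoG layer; on-center and off-center contrast folded together here
        contrasts.append(np.abs(c - s))
        center, surround = cv2.pyrDown(c), cv2.pyrDown(s)
    return contrasts
```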

Cited by 96 publications (87 citation statements)
References 37 publications
“…Saliency maps can be either purely bottom-up [61], [17], [25], or refined by top-down modulation [23], [62], [18], [20]. Bottom-up saliency highlights stimuli that are intrinsically salient in their context, which may sometimes be sufficient for scene exploration [64].…”
Section: A. Visual Attention and Visual Saliency (mentioning)
confidence: 99%
“…However, top-down modulation, which highlights elements that are relevant for a specific task, is more meaningful for the problem of object detection in indoor environments. Saliency maps are either fixation-based [30], [17] or area-based [11], [20], [61]. The fixation-based approach relates to the probability that a human makes a fixation at a given pixel position, while the area-based approach considers salient elements (typically objects) as whole areas of the image.…”
Section: A. Visual Attention and Visual Saliency (mentioning)
confidence: 99%
“…The well-known Itti saliency detection model (Itti and Koch, 2000) integrates intensity, orientation, and color information to construct features similar to basic physiological visual ones. However, the orientation channel usually turns out to be less useful for salient object segmentation (Frintrop et al., 2015), since it assigns high saliency values to object edges and thus makes object segmentation difficult. Thus we extract the same saliency features, color and intensity, as (Frintrop et al., 2015) to generate the saliency map.…”
Section: Simple Feature Extraction (mentioning)
confidence: 99%
“…However, the orientation channel usually turns out to be less useful for salient object segmentation (Frintrop et al., 2015), since it assigns high saliency values to object edges and thus makes object segmentation difficult. Thus we extract the same saliency features, color and intensity, as (Frintrop et al., 2015) to generate the saliency map. We extract color features from the intensity channel and the newly defined RG and BY color channels, which are calculated as I = (R + G + B)/3, RG = R − G and BY = B − (R + G)/2, respectively.…”
Section: Simple Feature Extraction (mentioning)
confidence: 99%
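The channel definitions quoted above translate directly into code; a minimal NumPy sketch, assuming a float RGB input of shape (H, W, 3):

```python
import numpy as np

def color_channels(rgb):
    # Intensity and opponent color channels as defined in the quote:
    # I = (R + G + B) / 3, RG = R - G, BY = B - (R + G) / 2.
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g
    by = b - (r + g) / 2.0
    return intensity, rg, by
```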
“…The input is represented as feature maps consisting of colour channels and responses of oriented Gabor filters. Recently, Frintrop et al. [5] modified this approach to detect larger salient regions instead of points, showing its continued appeal. Other modifications include weighting the different feature maps after identifying useful features [11] and exploring the role of saliency in overt attention [19].…”
Section: Introduction (mentioning)
confidence: 99%
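For readers unfamiliar with the oriented Gabor responses mentioned in this excerpt, a minimal filter-bank sketch follows; the kernel size and the sigma/lambda/gamma values are illustrative assumptions, not parameters of the cited models.

```python
import cv2
import numpy as np

def gabor_orientation_maps(gray, thetas_deg=(0, 45, 90, 135)):
    # Convolve a grayscale image with Gabor kernels at four
    # orientations, yielding one orientation feature map per angle.
    gray = gray.astype(np.float32)
    maps = []
    for theta in thetas_deg:
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0,
                                    theta=np.deg2rad(theta),
                                    lambd=10.0, gamma=0.5, psi=0.0)
        maps.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return maps
```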