2013 14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS)
DOI: 10.1109/wiamis.2013.6616119

Superpixel-based saliency detection

Abstract: In this paper, we propose an effective superpixel-based saliency model. First, the original image is simplified by performing superpixel segmentation and adaptive color quantization. On the basis of the superpixel representation, inter-superpixel similarity measures are then calculated from the difference of histograms and the spatial distance between each pair of superpixels. For each superpixel, its global contrast measure and spatial sparsity measure are evaluated, and refined with the integration of inter-superpixel…
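The abstract outlines a pipeline: superpixel segmentation, color quantization, histogram- and distance-based inter-superpixel similarity, and per-superpixel global contrast and spatial sparsity measures. Below is a minimal sketch of such a pipeline, assuming SLIC superpixels and a uniform color quantization; the function name, parameter values, and the exact form of the similarity, contrast, and sparsity terms are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of a superpixel-based saliency pipeline in the spirit of the abstract.
# Names and formulas are illustrative, not the authors' code.
import numpy as np
from skimage import img_as_float
from skimage.segmentation import slic

def superpixel_saliency(image, n_segments=200, bins_per_channel=4, sigma_s=0.25):
    # Superpixel segmentation (SLIC) on a float RGB image of shape (H, W, 3).
    img = img_as_float(image)
    labels = slic(img, n_segments=n_segments, compactness=10, start_label=0)
    n_sp = labels.max() + 1
    h, w = labels.shape

    # Coarse, uniform color quantization (the paper uses adaptive quantization).
    q = np.minimum((img * bins_per_channel).astype(int), bins_per_channel - 1)
    bin_idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    n_bins = bins_per_channel ** 3

    # Per-superpixel normalized color histograms and centroids in [0, 1] coordinates.
    hist = np.zeros((n_sp, n_bins))
    centroids = np.zeros((n_sp, 2))
    ys, xs = np.mgrid[0:h, 0:w]
    for k in range(n_sp):
        mask = labels == k
        hist[k] = np.bincount(bin_idx[mask], minlength=n_bins) / mask.sum()
        centroids[k] = [ys[mask].mean() / h, xs[mask].mean() / w]

    # Inter-superpixel similarity: histogram intersection attenuated by spatial distance.
    color_sim = np.minimum(hist[:, None, :], hist[None, :, :]).sum(-1)
    dist = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    sim = color_sim * np.exp(-dist ** 2 / (2 * sigma_s ** 2))

    # Global contrast: how dissimilar a superpixel's colors are from all the others.
    contrast = 1.0 - (color_sim.sum(axis=1) - 1.0) / (n_sp - 1)

    # Spatial sparsity: superpixels whose similar peers are spatially concentrated
    # score higher than those whose colors are spread across the whole image.
    spread = (sim * dist).sum(axis=1) / sim.sum(axis=1)
    sparsity = 1.0 - spread / (spread.max() + 1e-12)

    # Combine, normalize, and project the per-superpixel scores back to pixels.
    s = contrast * sparsity
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    return s[labels]
```

With roughly 200 superpixels (a figure one citing paper below recommends for edge preservation), the pairwise similarity matrices stay small, so the computation remains lightweight compared with pixel-level contrast methods.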

Cited by 26 publications (15 citation statements); references 18 publications.

“…In [22], superpixel segmentation is used to compress images more efficiently than traditional techniques. Liu et al. proposed to calculate the inter-superpixel similarity, global contrast, and spatial sparsity to generate a superpixel-level saliency map [23]. Superpixels are also applicable to image decomposition [24], multisensory video fusion [25], and image synthesis [26].…”
Section: A. Superpixel Versus Image Patch (mentioning, confidence: 99%)
“…It is necessary to segment images into a reasonable number of superpixels. A number of superpixels larger than 200 is generally sufficient for edge preservation [23]. Excessive superpixels would lead to a high computational cost.…”
Section: B. Superpixel Segmentation of Reference and Distorted Images (mentioning, confidence: 99%)
“…Visual saliency is the perceptual quality that makes some objects in a scene stand out from their surrounding regions and thus capture human visual attention [3]. We, as humans, are experts at quickly and accurately identifying the most visually noticeable foreground object in the scene, known as the salient object, and adaptively focus our attention on such perceived important regions [1].…”
Section: Introduction (mentioning, confidence: 99%)
“…Statistical models such as the Gaussian model [40] and the kernel density estimation-based nonparametric model [41] are used to represent each region, and both color and spatial saliency measures of such statistical models are evaluated and integrated to measure pixel saliency. Using different formulations, global contrast and spatially weighted regional contrast [42], color compactness of over-segmented regions [43], distinctiveness and compactness of regional histograms [44], global contrast and spatial sparsity of superpixels [45], and two contrast measures rating the global uniqueness and spatial distribution of colors in the saliency filter [46] are exploited to generate saliency maps with well-defined boundaries. In the recently proposed hierarchical saliency model [47], saliency cues are calculated on three image layers with different scales of segmented regions, and hierarchical inference is then exploited to fuse them into a single saliency map.…”
(mentioning, confidence: 99%)
“…The proposed model is considerably different from [47] in its complete framework of saliency tree generation and analysis, which selects the most suitable region representation by exploiting the hierarchy of the tree structure to effectively improve saliency detection performance. Second, on the basis of our previous work [45], [51], we integrate three measures, i.e., global contrast, spatial sparsity and object prior, at the region level to reasonably initialize the regional saliency measures. Third, we propose a saliency-directed region merging approach with a dynamic scale control scheme for saliency tree generation, which can preserve meaningful regions at different scales.…”
(mentioning, confidence: 99%)
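The last excerpt describes initializing regional saliency by integrating global contrast, spatial sparsity, and an object prior. A minimal sketch of such an integration step follows, reusing per-region contrast, sparsity, and centroid arrays like those in the earlier sketch; the center-based object prior and the multiplicative combination are illustrative assumptions, since the cited works define their own priors and fusion rules.

```python
# Illustrative integration of three regional measures; not the cited authors' formulation.
import numpy as np

def integrate_regional_measures(contrast, sparsity, centroids, sigma_c=0.35):
    # contrast, sparsity: per-region scores in [0, 1]; centroids: (n, 2) normalized (y, x).
    # Center-based object prior (an assumption for illustration): regions near the
    # image center are weighted up, mimicking a generic object-location prior.
    d_center = np.linalg.norm(centroids - 0.5, axis=1)
    object_prior = np.exp(-d_center ** 2 / (2 * sigma_c ** 2))

    # Multiplicative integration of the three measures, normalized to [0, 1].
    s = contrast * sparsity * object_prior
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```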