2011 International Conference on Computer Vision (ICCV 2011)
DOI: 10.1109/iccv.2011.6126333
Fusing generic objectness and visual saliency for salient object detection


Cited by 124 publications (26 citation statements)
References 20 publications
“…Next, we briefly describe the datasets and report detailed quantitative and qualitative comparisons of our approach with state-of-the-art approaches. To save space, we compare our method with several prior ones, including SVO [39], PCAS [40], RC [12] and DRFI [11], which are the top four models (or their improvements) in the survey [2]. In addition, we also consider well-known methods, such as CA [32], FT [30], HS [10], LRMR [33] and MR [15], that are not covered in [2].…”
Section: Experimental Results and Analysis
confidence: 99%
“…5, our method consistently produces the lowest error on the MSRA-B, SED and iCoSeg datasets, indicating greater robustness across different datasets. Despite good performance in precision-recall curves and F-measure, LRMR [33], CA [32], FT [30] and SVO [39] have higher MAE due to weak background suppression.…”
Section: Quantitative Comparison
confidence: 95%
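The F-measure mentioned in the excerpt above is, in the salient-object-detection literature, conventionally the weighted harmonic mean of precision and recall with beta² = 0.3 (emphasizing precision), computed on a binarized saliency map. A minimal sketch of that computation; the function name and the fixed 0.5 threshold are assumptions for illustration, not the cited papers' exact protocol:

```python
import numpy as np

def f_measure(saliency, ground_truth, threshold=0.5, beta2=0.3):
    """Weighted F-measure on a binarized saliency map.

    beta2 = 0.3 follows the common convention in salient object
    detection, which weights precision more heavily than recall.
    """
    pred = np.asarray(saliency, dtype=np.float64) >= threshold
    gt = np.asarray(ground_truth) > 0
    tp = np.logical_and(pred, gt).sum()  # true-positive pixel count
    if pred.sum() == 0 or tp == 0:
        return 0.0
    precision = tp / pred.sum()
    recall = tp / gt.sum()
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

In benchmark evaluations the threshold is usually swept over [0, 255] (or set adaptively per image) and the maximum or mean F-measure is reported, rather than a single fixed cut.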
“…Where S is the saliency map and G is the ground truth. The performance of the proposed method is compared with 12 state-of-the-art methods: DRFI [54], HS [10], PCA [55], RC [36], SVO [56], DSR [39], LEGS [57], DS [28], MDF [27], DCL [45], RFCN [44] and DSS [43]. Note that LEGS, DS, MDF, DCL, RFCN and DSS are methods based on deep learning.…”
Section: Evaluation Metrics
confidence: 99%
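The metric the excerpt defines in terms of a saliency map S and ground truth G is most likely the mean absolute error (MAE), the standard per-pixel error used alongside precision-recall in this literature. A minimal sketch, assuming both maps are normalized to [0, 1]; the function name is an illustrative choice:

```python
import numpy as np

def mae(saliency, ground_truth):
    """Mean absolute error between a saliency map S and ground truth G.

    Both inputs are expected as arrays scaled to [0, 1]; the result is
    the mean per-pixel absolute difference (lower is better).
    """
    S = np.asarray(saliency, dtype=np.float64)
    G = np.asarray(ground_truth, dtype=np.float64)
    return float(np.mean(np.abs(S - G)))
```

Unlike the F-measure, MAE needs no binarization threshold, which is why the cited comparison reports it separately: it directly penalizes weak background suppression even when precision-recall curves look good.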
“…To mimic this property, saliency detection algorithms are designed to identify the most informative regions of an image by using the priors and characteristics of the HVS. 17-26 Because this saliency information is useful for many applications of computer vision, as stated in the introduction, many researchers have proposed various kinds of priors to detect salient regions, such as the center prior, local and global contrast priors, background characteristics, and the boundary prior. In this section, we introduce just a few methods among numerous works that are quite fast and easily exploited for our saliency-based backlight control application: Yang et al. 22 adopted a semi-supervised learning scheme, but their algorithm is based on simple node ranking that considers whether a node belongs to the background or the salient region.…”
Section: Saliency Detection Algorithm
confidence: 99%