2017 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme.2017.8019413

Segmentation guided local proposal fusion for co-saliency detection

Cited by 12 publications (10 citation statements)
References 21 publications
“…See the second and third columns of cows on the MSRC v2 dataset. The backgrounds contain grass, which affects the detection of some compared methods such as CSHS [17], CoDW [16], SACS [27] and SGCS [30]. Comparatively, our method can better separate common objects from backgrounds, with clearer boundaries.…”
Section: E. Comparison With Baselines (mentioning)
confidence: 99%
“…We compare our method with representative approaches including CBCS [13], CBCS-s [13], CoDW [16], CSCO [31], CSHS [17], DIM [21], ESMG [18], GwD [35], IRSD [28], IRSD-s [28], SP-MIL [34], UMLBF [50], SACS [27], SACS-s [27], SGCS [30], UCSG [83] and Gw-FCN [36]. Among them, CBCS [13], CoDW [16], CSCO [31], CSHS [17], ESMG [18], IRSD [28], SP-MIL [34], SACS [27], SGCS [30] and UCSG [83] are unsupervised co-saliency methods, while DIM [21], GwD [35], UMLBF [50] and Gw-FCN [36] are supervised. To investigate the influence of interaction information, CBCS-s [13], IRSD-s [28] and SACS-s [27], which target single-image saliency detection, are also used as baselines in our work. When available, we use the publicly released source code with the default parameters provided by the authors to reproduce the experiments on our test sets.…”
Section: Compared Baselines (mentioning)
confidence: 99%
“…Bottom-up object saliency detection. Bottom-up approaches [4,5] find objects that attract human attention. Different hypotheses or priors [4,5] are used to distinguish salient objects from the background, such as global/local contrast [8,21], focusness [22], objectness [7,20,22], co-segmentation [40] or video motion [19]. These approaches sometimes fail because the hypotheses or priors vary from one object category to another.…”
Section: Related Work (mentioning)
confidence: 99%
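
The contrast prior named in the excerpt above is simple enough to sketch. Below is a minimal, illustrative Python example of global color-contrast saliency; it is not the algorithm of any cited paper, and the function name, bin count, and quantization scheme are assumptions made for this sketch. Each pixel is scored by the histogram-weighted distance of its quantized color to all other colors in the image, so rare colors stand out while dominant backgrounds are suppressed.

```python
import numpy as np

def global_contrast_saliency(image, bins=8):
    """Toy global color-contrast saliency map (illustrative sketch only).

    Quantizes colors into a coarse histogram, then scores each pixel by
    the histogram-weighted distance of its color to every other color:
    rare colors get high saliency, dominant backgrounds get low saliency.
    """
    # Quantize each RGB channel into `bins` levels (image: HxWx3, uint8).
    q = (image.astype(np.float64) / 256.0 * bins).astype(int)
    flat = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]

    labels, counts = np.unique(flat, return_counts=True)
    probs = counts / counts.sum()

    # Representative (center) color of each occupied histogram bin.
    centers = np.stack([
        labels // (bins * bins) + 0.5,
        (labels // bins) % bins + 0.5,
        labels % bins + 0.5,
    ], axis=1) * (256.0 / bins)

    # Saliency of a color = sum over all colors of prob * color distance.
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    color_saliency = dists @ probs

    # Map per-color saliency back onto pixels, normalized to [0, 1].
    lut = dict(zip(labels, color_saliency))
    sal = np.vectorize(lut.get)(flat)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

Local contrast, focusness, and objectness follow the same pattern, swapping the global per-color distance for a neighborhood- or window-level cue.
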
“…In other words, in addition to the saliency attribute within an individual image, the repetitiveness constraint across the whole image group is also crucial for suppressing the background and non-common salient regions. In existing methods, the inter-image correspondence is modeled as a matching process [43]-[48], a clustering process [49]-[51], a low-rank problem [52], [53], a propagation process [54]-[56], or a learning process [57]-[61]. However, the matching- and propagation-based methods are often time-consuming, while the clustering-based methods are sensitive to noise.…”
Section: Introduction (mentioning)
confidence: 99%
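
As a concrete illustration of the clustering-based correspondence described in the excerpt above, here is a minimal Python sketch; it is not any of the cited algorithms, and the color features, subsampling stride, and entropy-based cluster score are all assumptions. Pixels from the whole image group are clustered jointly, and each cluster is scored by how evenly it recurs across the images, since a genuinely common object should appear in every image of the group.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustering_cosaliency(images, k=6):
    """Toy clustering-based co-saliency cue (illustrative sketch only).

    Clusters pixel colors pooled from the whole image group, then scores
    each cluster by how evenly it is spread across the images: a cluster
    recurring in every image is a candidate common (co-salient) region.
    Assumes at least two images, each HxWx3.
    """
    # Pool subsampled pixels from every image into one feature matrix,
    # remembering which image each pixel came from.
    feats, owners = [], []
    for i, img in enumerate(images):
        px = img.reshape(-1, 3)[::16].astype(np.float64)  # cheap subsample
        feats.append(px)
        owners.append(np.full(len(px), i))
    feats, owners = np.concatenate(feats), np.concatenate(owners)

    km = KMeans(n_clusters=k, n_init=10).fit(feats)

    # Repetitiveness score: normalized entropy of a cluster's occurrence
    # distribution over images (1.0 = appears equally in all images).
    n_imgs = len(images)
    scores = np.zeros(k)
    for c in range(k):
        hist = np.bincount(owners[km.labels_ == c], minlength=n_imgs)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        scores[c] = -np.sum(p * np.log(p)) / np.log(n_imgs)

    # Per-image co-saliency map: each pixel inherits its cluster's score.
    maps = []
    for img in images:
        labels = km.predict(img.reshape(-1, 3).astype(np.float64))
        maps.append(scores[labels].reshape(img.shape[:2]))
    return maps
```

The sketch also exposes the noise sensitivity noted in the excerpt: a background texture shared by every image (e.g., the grass in the MSRC v2 cow group mentioned earlier) earns the same repetitiveness score as the common object, which is why practical methods combine this inter-image cue with per-image saliency.
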