2018
DOI: 10.1109/tip.2017.2781424
Robust Object Co-Segmentation Using Background Prior

Abstract: Given a set of images that contain objects from a common category, object co-segmentation aims at automatically discovering and segmenting such common objects from each image. During the past few years, object co-segmentation has received great attention in the computer vision community. However, the existing approaches are usually designed with misleading assumptions, unscalable priors, or subjective computational models, which do not have sufficient robustness for dealing with complex and unconstrained real-…

Cited by 173 publications (45 citation statements)
References 58 publications
“…Table 1 shows the detailed results. We can see that our AGNN outperforms the best reported results (i.e., AGS [69]) on the DAVIS16 benchmark by a significant margin in terms of mean J (80.7 vs 79.7) and F (79.1 vs 77.4). Compared to PDB [55], which uses the same training protocol and training datasets, our AGNN yields significant performance gains of 3.5% and 4.6% in terms of mean J and mean F, respectively.…”
Section: Quantitative Performance
confidence: 72%
“…Implementation Details: Following [44,55], both static data from image salient object segmentation datasets, MSRA10K [8] and DUT [72], and video data from the training set of DAVIS16 are iteratively used to train our model. In a 'static-image' iteration, we randomly sample 6 images from the static training data to train our backbone network (DeepLabV3) to extract more discriminative foreground features.…”
Section: Experimental Setup, Datasets and Metrics
confidence: 99%
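The alternating static-image/video training schedule described in this quote can be sketched roughly as follows. This is a minimal illustration only: the function name, the pool arguments, and the strict static/video alternation are assumptions for the sketch, not details taken from the cited work.

```python
import random

def make_iteration_sampler(static_pool, video_pool, static_batch=6, seed=0):
    """Alternate between 'static-image' and 'video' training iterations.

    static_pool: list of static image identifiers (e.g. from MSRA10K / DUT).
    video_pool:  list of video sequence identifiers (e.g. DAVIS16 training set).
    In a static-image iteration, 6 images are sampled at random, matching
    the quoted setup; the video-iteration policy here is an assumption.
    """
    rng = random.Random(seed)
    step = 0

    def next_batch():
        nonlocal step
        if step % 2 == 0:
            # Static-image iteration: randomly sample 6 static images.
            batch = ("static", rng.sample(static_pool, static_batch))
        else:
            # Video iteration: pick one training sequence to draw frames from.
            batch = ("video", rng.choice(video_pool))
        step += 1
        return batch

    return next_batch
```

Each call to the returned sampler yields the next iteration's data source, so a training loop can feed the backbone static batches and the full model video batches in turn.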