2011 International Conference on Computer Vision
DOI: 10.1109/iccv.2011.6126343
Semantic contours from inverse detectors

Abstract: We study the challenging problem of localizing and classifying category-specific object contours in real-world images. For this purpose, we present a simple yet effective method for combining generic object detectors with bottom-up contours to identify object contours. We also provide a principled way of combining information from different part detectors and across categories. In order to study the problem and evaluate our approach quantitatively, we present a dataset of semantic exterior boundaries on more th…


Cited by 1,444 publications (758 citation statements). References 20 publications.
“…We train our model with the train set (1,464 images) and the extra annotations provided by [26] (resulting in an augmented set of 10,582 images), and test it on the validation set (1,449 images). The performance is measured in terms of pixel Intersection-over-Union (IoU) averaged across the 21 categories.…”
Section: Weakly Supervised Segmentation
confidence: 99%
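The evaluation described above — pixel Intersection-over-Union averaged across the 21 PASCAL VOC categories — can be sketched as follows. This is a minimal illustration of the metric, not the benchmark's official evaluation code (which, for example, also excludes an ignore/void label from the accumulation):

```python
import numpy as np

def mean_iou(pred, gt, num_classes=21):
    """Pixel Intersection-over-Union averaged across categories.

    pred, gt: integer label maps of identical shape, with values
    in [0, num_classes). Classes absent from both maps are skipped
    so they do not drag the average down.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

In practice the intersections and unions are accumulated over the whole validation set per class before dividing, rather than averaged per image.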
“…The majority of papers on edge detection have focused on using only low-level cues, such as pixel intensity or color [1][2][3][4][5]. Recent work has started exploring the problem of boundary detection based on higher-level representations of the image, such as motion, surface and depth cues [6][7][8], segmentation [9], as well as category specific information [10,11].…”
Section: Introduction
confidence: 99%
“…Taniai Benchmark [45]: We first evaluated our FCSS descriptor on the Taniai benchmark [45], which consists of 400 image pairs divided into three groups: FG3DCar [29], JODS [37], and PASCAL [20]. As in [45], flow accuracy was measured by computing the proportion of foreground… (Figure 6).…”
Section: Results
confidence: 99%