2013 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2013.412

Robust Object Co-detection

Cited by 27 publications (23 citation statements); references 25 publications. The citation statements below were published between 2013 and 2019 and are ordered by relevance.

Citation statements
“…To effectively utilize prior knowledge from a collection of labeled examples, we devise a structure pursuit model for semantic segmentation and labeling of 3D meshes based on low-rank representation (LRR), which is a powerful tool to recover the relationship between data [3][4][5]. Different from the meaning in computer graphics, the "structure" above means the underlying structure of the features.…”
Section: Introduction (mentioning)
confidence: 99%
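
For context, the low-rank representation (LRR) referred to in the statement above is usually posed as the following convex program; this is the standard formulation from the LRR literature, not an equation reproduced from this report or from the cited paper:

\[ \min_{Z,E}\ \|Z\|_{*} + \lambda\,\|E\|_{2,1} \qquad \text{s.t.}\quad X = XZ + E \]

Here $X$ stacks the feature vectors as columns, the nuclear norm $\|Z\|_{*}$ encourages a low-rank affinity matrix $Z$ that captures the relationships among samples, and the $\ell_{2,1}$ term $\|E\|_{2,1}$ absorbs sample-specific corruptions.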
“…By contrast, object co-detection [2] attempts to simultaneously exploit the similarity between pairs of objects to perform detection jointly in multiple images. Most existing co-detection methods rely on simple handcrafted features to model object similarity [2,8,20,11]. By contrast, in our previous work [10], we performed feature selection using pre-trained CNN features.…”
Section: Related Work (mentioning)
confidence: 99%
“…By contrast, in a parallel line of research, object co-detection [2,8,10,11] has emerged as an effective approach to leveraging the information jointly contained in multiple images to improve detection accuracy. Unfortunately, to model the similarity of multiple objects, existing methods rely on either handcrafted features [2,8,20,11], or features learned for object recognition [10]. As a consequence, they are ill-suited to handle general object proposals, whose appearance is subject to much larger variations than specific object classes.…”
Section: Introduction (mentioning)
confidence: 99%
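
To illustrate the idea described in the statement above, the minimal sketch below selects one candidate window per image so that per-candidate detector scores and pairwise appearance similarity are jointly high. All names, the cosine similarity, and the exhaustive search are illustrative assumptions for a toy example, not the formulation of the cited paper.

from itertools import product

import numpy as np


def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def co_detect(unary_scores, features, alpha=1.0):
    """Pick one candidate per image maximizing unary score plus pairwise similarity.

    unary_scores: list over images; each entry is an array of detector scores,
                  one per candidate window in that image.
    features:     list over images; each entry is a (num_candidates, dim) array
                  of appearance features for the candidates.
    alpha:        weight on the pairwise similarity term.
    """
    best_choice, best_energy = None, -np.inf
    # Exhaustive search over one-candidate-per-image assignments (toy scale only).
    for choice in product(*[range(len(s)) for s in unary_scores]):
        energy = sum(unary_scores[i][c] for i, c in enumerate(choice))
        for i in range(len(choice)):
            for j in range(i + 1, len(choice)):
                energy += alpha * cosine(features[i][choice[i]], features[j][choice[j]])
        if energy > best_energy:
            best_choice, best_energy = choice, energy
    return best_choice, best_energy


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two images, three candidate windows each, 8-D appearance features.
    unary = [rng.normal(size=3), rng.normal(size=3)]
    feats = [rng.normal(size=(3, 8)), rng.normal(size=(3, 8))]
    print(co_detect(unary, feats))

The exhaustive search is only meant to make the joint objective explicit; practical co-detection systems replace it with approximate inference over many images and candidates.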
“…However these tend to require that the object be in each image considered and are as such not appropriate for our noisy annotation task. A recent method [15] could provide an alternative method of achieving visual consistency amongst the candidates.…”
Section: Searching for Candidate Regions (mentioning)
confidence: 99%