2011 International Conference on Computer Vision
DOI: 10.1109/iccv.2011.6126342
From contours to 3D object detection and pose estimation

Cited by 115 publications (98 citation statements); references 14 publications. Citing publications range from 2013 to 2022.
“…To overcome these limitations, researchers tried to learn the object appearance from 3D models [7,8,10]. The approach of Stark et al. [7] relies only on 3D CAD models of cars, and Liebelt and Schmid [8] combine geometric shape and pose priors with natural images.…”
Section: Related Work
confidence: 99%
“…Both of these approaches work well and also generalize to object classes, but they are not real-time capable, require expensive training and cannot handle clutter and occlusions well. In [10], the authors use a number of viewpoint-specific shape representations to model the object category. They rely on contours and introduce a novel feature called BOB (bag of boundaries), which, at a given point in the image, is a histogram of boundaries drawn from the image contours of the training images.…”
Section: Related Work
confidence: 99%
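The BOB (bag of boundaries) feature described in the excerpt above is, at a given image point, a histogram built from the boundaries of training-image contours. The sketch below is only a rough, simplified illustration of such a local boundary histogram, not the exact BOB feature of [10]: the function name, window size, orientation binning and normalisation are all assumptions of this sketch.

```python
# Rough sketch of a local "bag of boundaries"-style descriptor:
# a histogram of boundary-pixel orientations in a window around a point.
# NOTE: illustrative only; window size, binning and normalisation are
# assumptions, not the exact BOB definition used in [10].
import numpy as np

def boundary_orientation_histogram(edge_mask, orientations, point, win=32, n_bins=8):
    """Histogram of contour orientations in a (2*win x 2*win) window centred at `point`.

    edge_mask    -- boolean HxW array marking boundary (contour) pixels
    orientations -- HxW array of gradient orientations in radians, in [0, pi)
    point        -- (row, col) location at which the descriptor is evaluated
    """
    r, c = point
    h, w = edge_mask.shape
    r0, r1 = max(r - win, 0), min(r + win, h)
    c0, c1 = max(c - win, 0), min(c + win, w)

    # Orientations of the boundary pixels that fall inside the window.
    window_edges = edge_mask[r0:r1, c0:c1]
    boundary_theta = orientations[r0:r1, c0:c1][window_edges]

    # Bin into n_bins over [0, pi) and L1-normalise so the descriptor sums to 1.
    hist, _ = np.histogram(boundary_theta, bins=n_bins, range=(0.0, np.pi))
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(float)
```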
“…Common approaches to the problem of detecting shapes address these issues by relying on shape information estimated from gray-scale images, i.e., by extracting contours and local orientations based on the local gradients of intensity images rather than using binary images [12]. For instance, by using networks of local segments as descriptors and performing detection of shapes belonging to classes that are relatively easy to differentiate in visual terms [13].…”
Section: Related Work
confidence: 99%
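The excerpt above contrasts binary-image methods with estimating contours and local orientations from the local gradients of intensity images. A minimal sketch of that gradient-based extraction is given below; the Sobel kernels and the relative magnitude threshold are illustrative assumptions, not the specific procedures of [12] or [13].

```python
# Minimal sketch: contour pixels and local orientations from intensity gradients
# of a grey-scale image (rather than from a binary image). The kernels and the
# relative threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

def gradient_contours(gray, rel_thresh=0.1):
    """Return (edge_mask, orientation) estimated from local intensity gradients."""
    gray = gray.astype(float)
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # Sobel kernel, horizontal gradient
    ky = kx.T                                   # Sobel kernel, vertical gradient

    gx = convolve(gray, kx, mode='nearest')
    gy = convolve(gray, ky, mode='nearest')

    magnitude = np.hypot(gx, gy)                        # local edge strength
    orientation = np.mod(np.arctan2(gy, gx), np.pi)     # local orientation in [0, pi)
    edge_mask = magnitude > rel_thresh * magnitude.max()  # keep the strongest responses
    return edge_mask, orientation
```

The resulting edge_mask/orientation pair is the kind of input assumed by the local boundary-histogram sketch above.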
“…Testing is performed on scenes ("Table-Top-Local") containing one or several instances of the objects in a cluttered office environment; note that those experimental conditions are more challenging than existing evaluations (e.g. in [56]) since those two parts of the dataset feature different imaging and lighting conditions. We perform detection in the test images of each object category separately, and we measure the detection rates with the standard criterion of 50% bounding box overlap.…”
Section: Tabletop Dataset: Multiview Model Detection in Clutter
confidence: 99%
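The 50% bounding-box overlap criterion mentioned in the excerpt above commonly refers to the intersection-over-union (IoU) test of the PASCAL VOC protocol. A minimal sketch of that test, assuming boxes are given as (x_min, y_min, x_max, y_max) tuples:

```python
# Standard 50% bounding-box overlap criterion: a detection counts as correct if
# the intersection-over-union (IoU) with the ground-truth box is at least 0.5.
# Boxes are assumed to be (x_min, y_min, x_max, y_max) tuples.
def iou(box_a, box_b):
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_correct_detection(pred_box, gt_box, threshold=0.5):
    return iou(pred_box, gt_box) >= threshold
```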