2015 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2015.7139366
Grasping surfaces of revolution: Simultaneous pose and shape recovery from two views

Cited by 6 publications (8 citation statements)
References 18 publications
“…Solving for symmetry correspondence has been tried for surfaces of revolution, which are characterized by rotational symmetry [21][22][23], as well as for mirror-symmetrical polyhedral objects, where edge features are compared with respect to 2-D affine similarities (Refs. [24][25][26]).…”
Section: Related Research
confidence: 99%
“…Closest to this work are approaches on SOR reconstruction and pose estimation, using two views and manually segmented contours [23], or automatically segmenting contours in a single view before applying reconstruction [3]. The goal of this work is to jointly segment and reconstruct the object in an effort to achieve more robustness.…”
Section: Related Work
confidence: 99%
“…The apparent contour was automatically annotated by rendering the contours of the object's 3D model in the image. The 3D model of each object was obtained by spray-painting it, manually segmenting its apparent contour in multiple views and applying the reconstruction process described in [23], which is exact when the object pose in the camera frame is known.…”
Section: Annotated Dataset for Transparent Edge Detection
confidence: 99%