2019 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv.2019.00073

On Object Symmetries and 6D Pose Estimation from Images

Abstract: Objects with symmetries are common in our daily life and in industrial contexts, but are often ignored in the recent literature on 6D pose estimation from images. In this paper, we study in an analytical way the link between the symmetries of a 3D object and its appearance in images. We explain why symmetrical objects can be a challenge when training machine learning algorithms that aim at estimating their 6D pose from images. We propose an efficient and simple solution that relies on the normalization of the …
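The abstract is truncated, but pose normalization for a symmetric object can be sketched concretely. The snippet below is only an illustration under assumed conditions (a known discrete rotational symmetry about the object's z axis), not the paper's actual implementation; the helper names `z_rotation`, `symmetry_group`, and `canonical_rotation`, and the choice of canonical representative (smallest geodesic angle to the identity), are assumptions for illustration.

```python
import numpy as np

def z_rotation(angle):
    """Rotation matrix about the object's z axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def symmetry_group(order):
    """The `order` discrete rotational symmetries about the z axis."""
    return [z_rotation(2.0 * np.pi * k / order) for k in range(order)]

def canonical_rotation(R, sym_rotations):
    """Map R to the symmetry-equivalent rotation with the smallest geodesic
    angle to the identity, so all equivalent poses share one training label."""
    best, best_angle = None, np.inf
    for S in sym_rotations:
        R_eq = R @ S
        angle = np.arccos(np.clip((np.trace(R_eq) - 1.0) / 2.0, -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = R_eq, angle
    return best

# Example: a part with 4-fold symmetry about z; the two rotations below are
# indistinguishable in the image and map to the same canonical matrix.
sym = symmetry_group(4)
R_a = z_rotation(0.1)
R_b = z_rotation(0.1 + np.pi / 2.0)
assert np.allclose(canonical_rotation(R_a, sym), canonical_rotation(R_b, sym))
```

With such a normalization, the regression target seen by the network is single-valued even though several object orientations produce the same image.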

Cited by 51 publications (35 citation statements). References 25 publications.
“…Rad and Lepetit [52] assume that the global object symmetries are known and propose a pose normalization applicable to the case when the projection of the axis of symmetry is close to vertical. Pitteri et al [51] introduce a pose normalization that is not limited to this special case. Kehl et al [33] train a classifier for only a subset of viewpoints defined by global object symmetries.…”
Section: Related Work (mentioning)
confidence: 99%
“…The network is trained on several types of synthetic images. For T-LESS, we use 30K physically-based rendered (PBR) images from SyntheT-LESS [51], 50K images of objects rendered with OpenGL on random photographs from NYU Depth V2 [57] (similarly to [22]), and 38K real images from [24] showing objects on black background, where we replaced the background with random photographs. For YCB-V, we use the provided 113K real and 80K synthetic images.…”
Section: Datasets. The Experiments Are Conducted on Three Datasets: T-LESS (mentioning)
confidence: 99%
“…Without loss of generality, we assume that the green vector is along the symmetry axis; then, we set λ_r as zero to handle the circular symmetry objects. For other types of symmetric objects, we can employ the rotation mapping function used in [24,34] to map the relevant rotation matrices to a unique one.…”
Section: Decoupled Rotation Estimation (mentioning)
confidence: 99%
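The excerpt above drops the rotation-loss weight λ_r for circularly symmetric objects. As a rough illustration of why that is enough, and not the cited paper's code, the sketch below (hypothetical helper `symmetry_axis_error`, assumed symmetry axis z) reduces the rotation error to the angle between the predicted and ground-truth symmetry axes, which is invariant to any spin about the axis itself:

```python
import numpy as np

def symmetry_axis_error(R_pred, R_gt, axis=np.array([0.0, 0.0, 1.0])):
    """Angle (radians) between predicted and ground-truth symmetry axes.
    Any extra rotation of R_pred about the axis leaves this error unchanged."""
    a_pred = R_pred @ axis
    a_gt = R_gt @ axis
    return np.arccos(np.clip(a_pred @ a_gt, -1.0, 1.0))

# Sanity check: an in-plane spin about the symmetry axis costs nothing.
def z_rotation(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R_gt = z_rotation(0.3)
R_pred = R_gt @ z_rotation(1.2)  # differs only by a rotation about z
assert np.isclose(symmetry_axis_error(R_pred, R_gt), 0.0)
```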
“…The box shape was chosen to demonstrate the strength of our method in handling symmetric objects naturally. Inferring the pose of a symmetric object represents a particularly challenging problem, as documented in literature [27], [28], [29]. This is because the likelihood to be estimated is multi-modal, something that direct methods for object pose inference do not explicitly handle.…”
Section: A Symmetric Object with Occlusions (mentioning)
confidence: 99%
“…This is because the likelihood to be estimated is multi-modal, something that direct methods for object pose inference do not explicitly handle. In fact, such methods either infer one possible rotation arbitrarily [3] or require symmetry labeled training data [27]. Objects of other shapes can easily be considered by including their meshes into the synthetic data loader.…”
Section: A Symmetric Object with Occlusions (mentioning)
confidence: 99%
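A common way to cope with the multi-modal likelihood discussed in these excerpts is a symmetry-aware error that scores a prediction against the nearest symmetry-equivalent ground truth. The sketch below is a generic illustration, not any of the cited methods; it assumes the box's proper symmetry group is known (identity plus the three 180-degree rotations about its principal axes), and the names `geodesic_angle` and `symmetry_aware_error` are hypothetical:

```python
import numpy as np

# Proper symmetry group of a rectangular box with three distinct side lengths:
# identity plus the 180-degree rotations about its three principal axes.
BOX_SYMMETRIES = [
    np.diag([ 1.0,  1.0,  1.0]),
    np.diag([ 1.0, -1.0, -1.0]),
    np.diag([-1.0,  1.0, -1.0]),
    np.diag([-1.0, -1.0,  1.0]),
]

def geodesic_angle(R_a, R_b):
    """Rotation angle (radians) between two rotation matrices."""
    cos = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))

def symmetry_aware_error(R_pred, R_gt, symmetries=BOX_SYMMETRIES):
    """Distance from the prediction to the nearest mode of the pose
    likelihood, i.e. the nearest symmetry-equivalent ground truth."""
    return min(geodesic_angle(R_pred, R_gt @ S) for S in symmetries)
```

With this error, the four equivalent orientations of the box score identically, so a regressor is not penalized for converging to a mode other than the one that happened to be labeled; note that this kind of min-over-symmetries loss is an instance of the second option described above, since it presupposes that the object's symmetry group is annotated.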