2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)
DOI: 10.1109/robio.2018.8665097

2D Object Localization Based Point Pair Feature for Pose Estimation

Cited by 19 publications (12 citation statements) | References 17 publications
“…Hinterstoisser et al [25] introduced a better and more efficient sampling strategy with modifications to the pre- and post-processing steps, which achieved good results. Liu et al obtained an impressive result by combining machine-learning-based 2D object localization with a 3D pose estimation method in [26], [27]. Vidal et al proposed better preprocessing and clustering steps with an improved matching method for Point Pair Features in [29].…”
Section: A. Feature-based Methods (mentioning)
Confidence: 99%
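The Point Pair Feature referred to above is commonly the four-dimensional descriptor of Drost et al.: the distance between two oriented surface points and three angles involving their normals. A minimal sketch (function name and inputs are illustrative, not from the cited papers):

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Four-dimensional PPF of two oriented points:
    (distance, angle(n1, d), angle(n2, d), angle(n1, n2))."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    du = d / dist  # unit direction between the two points

    def ang(a, b):
        # numerically safe angle between unit vectors
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return (dist, ang(n1, du), ang(n2, du), ang(n1, n2))
```

In the matching stage these features are typically quantized and used as hash-table keys, so that scene point pairs can vote for model poses.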
“…PBVS is a method of positioning a robot by minimizing the difference between the target pose and the robot's current pose, which is estimated from captured images. PBVS has been attracting attention owing to the recent price reduction and spread of 3D sensors and to progress in 3D measurement [15], [16] and pose estimation [17], [18], [19] technology. However, PBVS requires the camera's intrinsic parameters, which makes it vulnerable to errors in those parameters.…”
Section: Introduction (mentioning)
Confidence: 99%
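The pose-error minimization described for PBVS is often realized as a simple proportional control law on the translation error and the axis-angle (log map) of the rotation error. A hedged sketch under that assumption (gain `lam` and function names are illustrative):

```python
import numpy as np

def rotation_log(R):
    """Axis-angle vector (log map) of a 3x3 rotation matrix."""
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-8:
        return np.zeros(3)  # no rotation error
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis

def pbvs_velocity(t_cur, t_goal, R_cur, R_goal, lam=0.5):
    """Proportional PBVS law: velocity commands that drive the pose error to zero."""
    v = -lam * (t_cur - t_goal)                     # translational command
    omega = -lam * rotation_log(R_goal.T @ R_cur)   # rotational command
    return v, omega
```

With the current pose equal to the goal pose, both commands vanish; otherwise the commanded twist decays the error exponentially at rate `lam`.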
“…For example, Liu et al [11] locate the object in the image using convolutional neural networks (CNN) [14]-[17] and then obtain the corresponding point cloud through a bounding box or mask [9]-[11]. Li et al [1] and Li and Hashimoto [13] simply divide the point cloud into many regions of interest (ROIs).…”
Section: Introduction (mentioning)
Confidence: 99%
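The step of obtaining a point cloud from a 2D bounding box amounts to back-projecting the depth pixels inside the box with the camera intrinsics. A minimal sketch, assuming a depth image in metric units and a standard pinhole intrinsic matrix `K` (names are illustrative):

```python
import numpy as np

def roi_point_cloud(depth, K, bbox):
    """Back-project the depth pixels inside a 2D bounding box
    (u0, v0, u1, v1) to 3D camera-frame points."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u0, v0, u1, v1 = bbox
    vs, us = np.mgrid[v0:v1, u0:u1]       # pixel grid inside the box
    z = depth[v0:v1, u0:u1]               # depth values in the ROI
    x = (us - cx) * z / fx                # pinhole back-projection
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]             # drop invalid (zero-depth) pixels
```

The resulting ROI cloud is what a 3D pose estimator such as a PPF-based matcher would then consume, instead of the full scene cloud.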