2011
DOI: 10.1016/j.mechatronics.2011.03.008

Interoperable vision component for object detection and 3D pose estimation for modularized robot control

Cited by 13 publications (11 citation statements)
References 7 publications
“…Evaluation results show that the proposed algorithm performs fairly well in the majority of the scenes, although its performance degrades when it is required to detect transparent plastic objects. Our method exceeds the state of the art of Mae et al. [8] for object detection, obtaining better results for objects that are small and have round surfaces, especially when the distance between the object and the robot is larger than 1.2 m. In future work, we propose to enlarge the object database and improve our object detection system using 3D information.…”
Section: Results (mentioning)
Confidence: 87%
“…The proposed method differs from the method proposed by Mae et al. [8] in three main respects: 1) we use the SURF algorithm [28] for feature extraction, while Mae et al. employed the Scale-Invariant Feature Transform (SIFT) [34-36] for this task; 2) to find the best match for each feature we use LSH [29,30], whereas the Hough transform [35] was used in [8]; and 3) we use 10 different small objects, such as carton bottles, plastic bottles, and circular objects, at a distance of 1.5 m, while the experiments in [8] used six small static objects. Figure 10 shows the results obtained using the Mae et al. method [8].…”
Section: Comparison with Y. Mae et al. [8] (mentioning)
Confidence: 79%
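The pipeline quoted above (local feature extraction followed by LSH-based matching with a ratio test) can be sketched in a few lines of OpenCV. Two assumptions to note: OpenCV's FLANN LSH index expects binary descriptors, so ORB stands in here for the SURF features used in the citing paper (SURF itself ships in opencv-contrib and produces float descriptors), and the file names and parameter values are placeholders rather than values from either paper.

```python
# Minimal sketch of a feature-extraction + LSH-matching pipeline, in the
# spirit of the SURF + LSH approach quoted above. ORB stands in for SURF
# because OpenCV's FLANN LSH index operates on binary descriptors.
import cv2

FLANN_INDEX_LSH = 6  # FLANN's identifier for the LSH index


def match_features(model_img, scene_img, ratio=0.75):
    """Return keypoints and model->scene matches passing Lowe's ratio test."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(model_img, None)
    kp2, des2 = orb.detectAndCompute(scene_img, None)

    matcher = cv2.FlannBasedMatcher(
        dict(algorithm=FLANN_INDEX_LSH, table_number=6,
             key_size=12, multi_probe_level=1),
        dict(checks=50))

    # The LSH index may return fewer than k neighbours per query,
    # so guard the pair unpacking before applying the ratio test.
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp1, kp2, good


if __name__ == "__main__":
    # 'model.png' and 'scene.png' are placeholder file names.
    model = cv2.imread("model.png", cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    kp1, kp2, good = match_features(model, scene)
    print(f"{len(good)} putative matches")
```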
“…The basic idea of appearance-based methods is to extract features and find feature correspondences between the reference frame and the current frame, and then to estimate the pose change between these two frames. In [5], [6], the authors exploit SIFT features and give a closed-form solution for pose estimation. Unfortunately, the pose in the reference frame is inaccurate or unknown in most practical applications.…”
Section: Introduction (mentioning)
Confidence: 99%
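A closed-form pose solution of the kind attributed to [5], [6] can be illustrated with the standard SVD-based (Kabsch/Umeyama) alignment of matched 3D feature points: given correspondences between the reference and current frames, the rotation and translation that relate them follow directly from the cross-covariance of the centered point sets. The sketch below is a generic illustration of that idea, not the exact formulation of the cited papers; the synthetic points in the self-test are made up.

```python
# Generic closed-form rigid alignment of matched 3D feature points between a
# reference frame P and a current frame Q (Kabsch/Umeyama via SVD). This is
# an illustrative sketch, not the specific formulation used in [5], [6].
import numpy as np


def estimate_pose_change(P, Q):
    """Find R, t minimising ||R @ p_i + t - q_i|| over (N, 3) matched points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if det < 0.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t


if __name__ == "__main__":
    # Self-test on synthetic data: rotate and translate a random point cloud,
    # then recover the transform.
    rng = np.random.default_rng(0)
    P = rng.standard_normal((20, 3))
    angle = np.deg2rad(30)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.1, -0.2, 0.3])
    Q = P @ R_true.T + t_true
    R, t = estimate_pose_change(P, Q)
    assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

As the excerpt notes, this kind of closed-form solution presumes the pose in the reference frame is known; when it is inaccurate or unavailable, the estimated change cannot be anchored to an absolute pose.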