2013
DOI: 10.1007/s11263-013-0627-y

Rotational Projection Statistics for 3D Local Surface Description and Object Recognition

Abstract: Recognizing 3D objects in the presence of noise, varying mesh resolution, occlusion and clutter is a very challenging task. This paper presents a novel method named Rotational Projection Statistics (RoPS). It has three major modules: Local Reference Frame (LRF) definition, RoPS feature description and 3D object recognition. We propose a novel technique to define the LRF by calculating the scatter matrix of all points lying on the local surface. RoPS feature descriptors are obtained by rotationally projecting t…
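The LRF construction mentioned in the abstract can be sketched in a few lines. The following is a deliberately simplified illustration only: the published RoPS LRF additionally weights the scatter matrix by triangle area and by distance to the keypoint and uses a more elaborate sign disambiguation, none of which is reproduced here. Function and variable names are hypothetical.

```python
import numpy as np

def local_reference_frame(patch_points, keypoint):
    """Simplified LRF sketch: eigenvectors of the scatter matrix of a local
    surface patch (Nx3 array), centred on the keypoint."""
    diffs = patch_points - keypoint                  # centre on the keypoint
    scatter = diffs.T @ diffs / len(patch_points)    # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)       # eigenvalues ascending
    x_axis = eigvecs[:, 2]                           # direction of largest spread
    z_axis = eigvecs[:, 0]                           # direction of smallest spread
    # crude sign disambiguation: point each axis towards the majority of points
    if np.sum(diffs @ x_axis) < 0:
        x_axis = -x_axis
    if np.sum(diffs @ z_axis) < 0:
        z_axis = -z_axis
    y_axis = np.cross(z_axis, x_axis)                # completes a right-handed frame
    return np.stack([x_axis, y_axis, z_axis])        # rows are the LRF axes

# Usage: local_coords = (patch_points - keypoint) @ local_reference_frame(patch_points, keypoint).T
```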

Cited by 595 publications (534 citation statements)
References 61 publications (151 reference statements)
“…Unlike the descriptors used so far, RoPS [5] requires a triangle mesh to work with. This mesh is generated following the approach for fast triangulation of unordered point clouds described in [17].…”
Section: Rotational Projection Statistics (RoPS)
Citation type: Mentioning (confidence: 99%)
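The mesh requirement noted in the snippet above exists because RoPS-style processing works on triangles rather than on raw, unordered points. The greedy-projection triangulation of [17] is not reproduced here; the sketch below only illustrates, with hypothetical names, the kind of operation a mesh enables: sampling points densely on existing triangles via barycentric coordinates.

```python
import numpy as np

def sample_points_on_mesh(vertices, triangles, points_per_triangle=10):
    """Illustrative sketch: sample points on a triangle mesh using random
    barycentric coordinates.  vertices is (V, 3), triangles is (T, 3) integer
    indices into vertices; the mesh itself is assumed to have been built
    beforehand, e.g. by a triangulation method such as the one in [17]."""
    v0 = vertices[triangles[:, 0]]           # (T, 3) first corner of each triangle
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]
    samples = []
    for _ in range(points_per_triangle):
        # random barycentric coordinates, folded back inside the triangle
        r1, r2 = np.random.rand(2)
        if r1 + r2 > 1.0:
            r1, r2 = 1.0 - r1, 1.0 - r2
        samples.append(v0 + r1 * (v1 - v0) + r2 * (v2 - v0))
    return np.concatenate(samples, axis=0)   # (T * points_per_triangle, 3)
```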
“…For object recognition, the Rotational Projection Statistics (RoPS) descriptor was reported as the best choice in [5], but in other studies the Point Feature Histograms (PFH) descriptor delivered good results as well [6] [7]. Regarding the matching of point clouds, recent publications rank the RoPS descriptor on top [8] [9], with [10] and [11] additionally reporting good results for the SHOT and Fast Point Feature Histograms (FPFH) descriptors, respectively.…”
Section: Introduction
Citation type: Mentioning (confidence: 99%)
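The comparisons cited above typically evaluate descriptors by matching them between a scene and a model. The sketch below is a generic, hedged illustration of such a matching step (nearest neighbour in descriptor space plus a ratio test); it is not taken from any of the cited studies, and all names are hypothetical.

```python
import numpy as np

def match_descriptors(desc_scene, desc_model, ratio=0.8):
    """Sketch: nearest-neighbour matching of descriptor vectors with a ratio
    test; desc_scene and desc_model are (N, D) and (M, D) arrays."""
    matches = []
    if len(desc_model) < 2:
        return matches                       # the ratio test needs two candidates
    for i, d in enumerate(desc_scene):
        dists = np.linalg.norm(desc_model - d, axis=1)
        first, second = np.argsort(dists)[:2]
        if dists[first] < ratio * dists[second]:
            matches.append((i, first))       # (scene_index, model_index)
    return matches
```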
“…Each scene was synthetically built by randomly rotating and translating three to five models to create clutter and pose variations. Consequently, the ground-truth rotations and translations between each model and its instances in the scene were known a priori from the construction process 7 .…”
Section: Dataset and Parameter Setting
Citation type: Mentioning (confidence: 99%)
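A minimal sketch of how synthetic scenes with known ground truth can be built is given below. It only illustrates the general idea (a uniformly random rotation plus a random translation per model instance); the exact construction used in the cited work may differ, and all names are hypothetical.

```python
import numpy as np

def random_rigid_transform(translation_scale=1.0, rng=None):
    """Draw a uniformly random rotation (QR of a Gaussian matrix, sign-fixed)
    and a random translation; the pair is the ground-truth pose by construction."""
    rng = np.random.default_rng() if rng is None else rng
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))                 # make the QR factorisation unique
    if np.linalg.det(q) < 0:                 # ensure a proper rotation (det = +1)
        q[:, 0] = -q[:, 0]
    t = rng.uniform(-translation_scale, translation_scale, size=3)
    return q, t

def place_model_in_scene(model_points, rotation, translation):
    """Apply the ground-truth pose to an (N, 3) model before merging it into a scene."""
    return model_points @ rotation.T + translation
```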
“…The global feature based algorithms achieved a good performance in shape retrieval applications. However, these algorithms require complete 3D models and therefore cannot cope well with occlusion and clutter 7 . In contrast, the local feature based methods generate a set of features by encoding the properties of the local patch around each keypoint, which makes them more robust to occlusion and clutter.…”
Section: Introduction
Citation type: Mentioning (confidence: 99%)
“…1,2 This technique has been used in numerous applications including automation, manipulation and grasping, robot localization and navigation, surgery, and education. [3][4][5] Given a database of 3-D models and a range image, the aim of object recognition is to identify the set of visible models and find the 3-D rigid transformations (i.e., rotations and translations) that transform the visible models into the scene so that the corresponding areas are well superimposed. 6 However, correctly recognizing all the visible models in scenes with different levels of noise, varying mesh resolution, occlusion, and clutter remains a challenging task.…”
Section: Introduction
Citation type: Mentioning (confidence: 99%)
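Given correspondences between a model and the scene (for example from descriptor matching), the rigid transformation described in the snippet above is classically recovered in closed form with an SVD-based Kabsch/Umeyama alignment. The sketch below shows that standard step under the assumption of already-matched point pairs; it is not the recognition pipeline of any of the cited papers, and the names are hypothetical.

```python
import numpy as np

def estimate_rigid_transform(model_pts, scene_pts):
    """Least-squares rotation R and translation t such that
    R @ model + t approximates scene, given matched (N, 3) point pairs."""
    mu_m = model_pts.mean(axis=0)
    mu_s = scene_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return R, t
```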