2018
DOI: 10.48550/arxiv.1811.01571
Preprint
SPNet: Deep 3D Object Classification and Retrieval using Stereographic Projection

Cited by 3 publications (4 citation statements)
References 0 publications
“…Previous researchers attempt to project a point cloud to 2D images along different directions and apply standard 2D CNN to extract features. The features learned from different images are aggregated to a global feature through a view-pooling layer, then this global feature can be utilized to classify objects [5,14,15]. Although the view based method can achieve high accuracy in the classification task, it is nontrivial to apply this method to segment point cloud, which classifies each point to a specific category.…”
Section: A View Based Methodsmentioning
confidence: 99%
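The view-pooling aggregation described in the statement above — per-view 2D CNN features collapsed into one global descriptor — can be sketched as an element-wise max across views. This is a minimal NumPy sketch: the function name, view count, and feature dimension are illustrative assumptions, and the CNN backbone that would produce the per-view features is omitted.

```python
import numpy as np

def view_pooling(view_features):
    """Aggregate per-view CNN features into a single global descriptor.

    view_features: (num_views, feature_dim) array, one feature vector per
    rendered 2D view of the object. Element-wise max across the view axis
    keeps, for each feature channel, the strongest response over all views.
    """
    return view_features.max(axis=0)

# e.g. 12 rendered views, each encoded as a 256-d feature vector
feats = np.random.rand(12, 256)
global_feat = view_pooling(feats)   # shape (256,), usable for classification
```

The max (rather than mean) makes the descriptor invariant to which view a discriminative feature came from, which is why view-pooled descriptors work well for whole-object classification but, as the statement notes, do not directly yield per-point labels for segmentation.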
“…The advanced tasks are more specific tasks directly solving a practical real world problem.

Local Descriptors (Hand-crafted): SHOT [3] 57.0 / 48.5; RoPS [4] 61.1 / 51.6; FPFH [29] 42.3 / 40.8; SI [26] 59.4 / 55.4; PCA [25] 45.8 / 45.6; SC [27] 61.5 / 58.2; SDF [28] 33.2 / 28.3
Local Descriptors (Deep Learning): LMVCNN [8] 66.1 / 66.1; CGF [7] 65.6 / 65.5; Ours 68.1 / 69.5
Global Descriptors (Deep Learning): RotationNet [34] 98.46; SPNet [32] 97.; SPNet [32] 94.2; PANORAMA-ENN [33] 93.28…”
Section: Applicationsmentioning
confidence: 99%
“…These techniques provide global descriptors. It can be seen that state-of-the-art techniques achieve greater than 95% accuracy; however, they usually rely on alternate representations of point clouds, such as projections [32] or ensembles of networks [33], to achieve such results instead of directly processing point clouds.…”
Section: Classificationmentioning
confidence: 99%
“…Nevertheless, in the era of deep learning, this representation is often bypassed because of its irregularity, which does not suit Convolutional Neural Networks (CNNs). Instead, 3D data is often represented as volumetric grids (Ben-Shabat et al 2018;Maturana and Scherer 2015;Roynard et al 2018;Sedaghat et al 2016b) or multiple 2D projections (Boulch et al 2017;Feng et al 2018a;Kanezaki et al 2018;Su et al 2015;Yavartanoo et al 2018). In some recent works point clouds are utilized and new ways to convolve or pool are proposed (Atzmon et al 2018;Hua et al 2018;Thomas et al 2019;Xu et al 2018).…”
Section: Introductionmentioning
confidence: 99%
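The volumetric-grid representation mentioned in the statement above maps an irregular point cloud into a regular occupancy grid that a 3D CNN can consume. A minimal sketch, assuming a simple binary occupancy encoding and a cubic grid (the function name and grid size are illustrative, not from any cited work):

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Map an (N, 3) point cloud into a grid_size^3 binary occupancy grid."""
    mins = points.min(axis=0)
    span = points.max(axis=0) - mins
    span[span == 0] = 1.0                      # guard against degenerate axes
    # normalize each point into [0, grid_size - 1] and take integer voxel indices
    idx = ((points - mins) / span * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # mark occupied voxels
    return grid
```

The regularity this buys (fixed shape, well-defined neighborhoods for 3D convolution) is exactly what raw point clouds lack, at the cost of quantization and cubic memory growth — which is what motivates the point-convolution works cited at the end of the statement.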