2020
DOI: 10.1007/978-3-030-58607-2_13
Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis

Abstract: In this paper we propose a rotation-invariant deep network for point cloud analysis. Point-based deep networks are commonly designed to recognize roughly aligned 3D shapes from point coordinates, but their performance drops under shape rotations. Some geometric features, e.g., distances and angles between points, are rotation-invariant network inputs, but they lose the positional information of points. In this work, we propose a novel deep network for point clouds by incorporating positional information of po…
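The trade-off the abstract describes can be checked directly: pairwise distances are unchanged by rotation, while raw coordinates are not. A minimal NumPy sketch (not from the paper; the `random_rotation` helper is my own) illustrating this:

```python
import numpy as np

def random_rotation(rng):
    # Orthogonalize a Gaussian matrix via QR, then force det = +1
    # so the result is a proper rotation rather than a reflection.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0
    return q

rng = np.random.default_rng(0)
points = rng.standard_normal((8, 3))   # toy point cloud: 8 points in R^3
rot = random_rotation(rng)
rotated = points @ rot.T

# Pairwise distances are rotation-invariant inputs ...
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
dist_rot = np.linalg.norm(rotated[:, None, :] - rotated[None, :, :], axis=-1)
assert np.allclose(dist, dist_rot)

# ... but they discard where each point sits: the coordinates change.
assert not np.allclose(points, rotated)
```

This is exactly the tension the paper targets: distance/angle features give invariance for free but drop positional information that coordinate-based networks exploit.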


Cited by 29 publications (17 citation statements)
References 33 publications
“…For the former, we adopt Point-Net [40] and DGCNN [54], which are widely adopted neural networks for geometric data. For rotation-invariant models, we adopt RIConv [63], ClusterNet [8], PR-invNet [61], and RI-GCN [22], which are state-of-the-art rotation-invariant models. Notice that permutation-equivariant models cannot be simply applied here and thus are not compared, as in all previous works.…”
Section: Point Cloud Analysis
confidence: 99%
“…Another direction to enhance rotation-robustness is to learn rotation-invariant representations in networks. To achieve this, the rotation-invariant networks are derived by using kernelized convolutions [25,29], PCA normalization [9,10,49] and rotation-invariant position encoding [3,13,50].…”
Section: Related Work
confidence: 99%
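Of the strategies this statement lists, PCA normalization is the simplest to sketch: express each cloud in the frame of its own principal axes, so any rotated copy maps to (nearly) the same canonical pose. A hedged NumPy illustration — the `pca_canonicalize` name is my own, and the per-axis sign ambiguity of PCA is deliberately left unresolved:

```python
import numpy as np

def pca_canonicalize(points):
    # Center the cloud, then express it in the frame of its principal
    # axes (the right singular vectors of the centered coordinates).
    # PCA leaves a per-axis sign ambiguity, which is not resolved here.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

rng = np.random.default_rng(1)
pts = rng.standard_normal((32, 3))

# Build a proper rotation (det = +1) to perturb the cloud.
q, r = np.linalg.qr(rng.standard_normal((3, 3)))
q *= np.sign(np.diag(r))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1.0

canon = pca_canonicalize(pts)
canon_rot = pca_canonicalize(pts @ q.T)

# Up to per-axis sign flips, the two canonical clouds coincide, so
# features computed on the canonical pose are rotation-invariant.
assert np.allclose(np.abs(canon), np.abs(canon_rot))
```

The sign flips are why practical PCA-based methods add a disambiguation step (or pool over the finitely many flip combinations) before feeding the canonical cloud to a network.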
“…Even worse, training with augmented data also introduces an adversarial effect that hurts inference performance when the test data is not rotated. Many works [3,9,10,13,25,29,49,50] attempted to address this issue by constructing rotation-invariant frameworks and features. Nevertheless, it is shown that rotation-invariant features can still suffer an evident performance drop when the test data is inherently not rotated.…”
Section: Introduction
confidence: 99%
“…Other works encode local neighborhoods using some local or global coordinate system to achieve invariance to rotations and translations. [17,57,60] use PCA to define rotation invariance. Equivariance is a desirable property for autoencoders.…”
Section: Related Work
confidence: 99%