2020 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv50981.2020.00031
Global Context Aware Convolutions for 3D Point Cloud Understanding

Cited by 31 publications (27 citation statements). References 21 publications.
“…One way is to use layers that produce rotationally invariant features, which can then be processed further without restrictions, for example by aligning the inputs to the convolutional filters. This approach is taken by GCANet [37], and by MA-KPConv [28], which extends the KPConv model by using multiple alignments for the filters. Another approach is to apply an invariant map as the final function of each layer, as in Spherical Harmonics Networks (SPHnet) [22], which calculate activations in a spherical harmonic basis and produce invariant output by taking the norm over coefficients with identical degree.…”
Section: Related Work
confidence: 99%
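The invariant-map idea mentioned in this excerpt (taking the norm over spherical-harmonic coefficients of the same degree) can be illustrated with a minimal NumPy sketch. This is not SPHnet's implementation; it only shows the principle for degree-1 coefficients, which transform like 3-vectors under rotation, so their norm is rotation invariant:

```python
import numpy as np

def degree1_coeffs(points):
    # Degree-1 spherical-harmonic-like coefficients: the (x, y, z)
    # first moments of the point cloud. Under a rotation R, this
    # 3-vector transforms as c -> R @ c, just like the points do.
    return points.sum(axis=0)

def invariant_feature(points):
    # Taking the norm over all coefficients of one degree cancels the
    # rotation, leaving a rotation-invariant scalar (the SPHnet-style
    # invariant map, reduced to a toy single-degree case).
    return np.linalg.norm(degree1_coeffs(points))

# Toy check: a random orthogonal transform leaves the feature unchanged.
rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
assert np.allclose(invariant_feature(pts), invariant_feature(pts @ Q.T))
```

Higher-degree coefficients transform by block-diagonal (Wigner) matrices, so the same per-degree norm trick applies block by block.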
“…Particularly, we propose an effective and lightweight approach to perform rotation-invariant convolution for point clouds, which is an extension of our previous work (Zhang et al. 2019a, 2020). We propose to make rotation-invariant features more informative by using a local reference axis (LRA), and consider point-point relations, which improves feature distinction as well.…”
Section: Introduction
confidence: 97%
“…However, in 3D, such data augmentation becomes less effective due to the additional degrees of freedom, which can make training prohibitively expensive. Some previous works have been proposed to learn rotation-invariant features (Zhang et al. 2019a, 2020; Rao et al. 2019; Poulenard et al. 2019; Deng et al. 2018; Chen et al. 2019), which leads to consistent predictions given arbitrarily rotated point clouds. We observe that state-of-the-art methods can improve the feature learning by using a local reference frame (LRF) to encode both local and global information (Zhang et al. 2020; Kim et al. 2020b; Thomas 2020).…”
Section: Introduction
confidence: 99%
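The LRA-based rotation-invariant features and point-point relations described above can be sketched in a few lines. The sketch below is illustrative only (the centroid-direction LRA and the specific feature choices are assumptions, not the cited papers' exact constructions): pairwise distances and angles to a reference axis are unchanged by rotation because rotations preserve norms and inner products.

```python
import numpy as np

def lra(points):
    # A simple local reference axis (LRA): the unit direction of the
    # patch centroid. Illustrative choice -- real methods derive the
    # LRA/LRF more robustly (e.g. from covariance eigenvectors).
    c = points.mean(axis=0)
    return c / np.linalg.norm(c)

def point_point_features(points):
    # Rotation-invariant point-point relations: pairwise distances plus
    # each point's cosine angle to the LRA. Both survive any rotation R
    # since R preserves lengths and dot products.
    axis = lra(points)
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    cos_ang = (points / np.linalg.norm(points, axis=1, keepdims=True)) @ axis
    return dists, cos_ang

# Toy check: features match before and after a random orthogonal transform.
rng = np.random.default_rng(1)
pts = rng.normal(size=(16, 3)) + 2.0  # offset keeps the centroid nonzero
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
d0, a0 = point_point_features(pts)
d1, a1 = point_point_features(pts @ Q.T)
assert np.allclose(d0, d1) and np.allclose(a0, a1)
```

Feeding such invariant relations into a convolution, rather than raw coordinates, is what makes the network's predictions consistent under arbitrary rotations without rotation augmentation.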