2022
DOI: 10.48550/arxiv.2201.12296
Preprint

Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions

Abstract: Deep neural networks on 3D point cloud data have been widely used in the real world, especially in safety-critical applications. However, their robustness against corruptions is less studied. In this paper, we present ModelNet40-C, the first comprehensive benchmark on 3D point cloud corruption robustness, consisting of 15 common and realistic corruptions. Our evaluation shows a significant gap between the performances on ModelNet40 and ModelNet40-C for state-of-the-art (SOTA) models. To reduce the gap, we prop…

Cited by 18 publications (26 citation statements)
References 47 publications
“…For instance, there is a lack of natural fog effects in these datasets, while fog could affect the reflection of laser beams and corrupt point cloud data with false reflections by droplets [9], [10]. Apart from the external scenarios, the internal noise of sensors can also increase the deviation and variance of ranging measurements [11] and result in corrupted data and detector performance degradation. Given that LiDAR-based point cloud detection is usually used in safety-critical applications (e.g., autonomous driving) and these external and internal corruptions could potentially affect detectors' robustness [12], [13], [11], it is critical to comprehensively evaluate an object detector under those corruptions before deploying it in real-world environments.…”
Section: Introduction
confidence: 99%
“…Hence, there is an increasing demand for extending existing benchmarks to conduct a comprehensive evaluation through covering diverse corruptions in the real world. A straightforward way is to synthesize the corrupted point clouds, given the success of similar solutions in image-based tasks [16], [17] and 3D object recognition [11], [18]. However, there is no accessible dataset for the robustness evaluation of point cloud detectors.…”
Section: Introduction
confidence: 99%
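The corruption synthesis the quote describes can be illustrated with a minimal sketch. The jitter corruption below, additive Gaussian noise emulating increased variance in sensor ranging measurements, is one of the common corruption types used in such benchmarks; the function name and `sigma` value here are illustrative, not the actual ModelNet40-C parameters.

```python
import numpy as np

def jitter(points: np.ndarray, sigma: float = 0.02, seed: int = 0) -> np.ndarray:
    """Corrupt an (N, 3) point cloud with additive Gaussian noise.

    Emulates per-point ranging noise from the sensor. Benchmarks
    typically sweep several severity levels (sigma values) rather
    than using a single one.
    """
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)

# Toy (N, 3) point cloud sampled in the unit cube.
cloud = np.random.default_rng(1).random((1024, 3))
corrupted = jitter(cloud, sigma=0.02)
```

A full benchmark would apply such transforms to every test sample at several severities and report accuracy per corruption type.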
“…To demonstrate the ability to do test-time training for synthetic data to real data transfer we further use VisDA-C [21], which is a challenging large-scale synthetic-to-real object classification dataset, consisting of 12 classes, 152,397 synthetic training images and 55,388 real testing images. Finally, to evaluate test-time training on 3D point cloud data, we choose ModelNet40-C [26], which consists of 15 common and realistic corruptions of point cloud data, with 9,843 training samples and 2,468 test samples.…”
Section: Datasets
confidence: 99%
“…5 of the main paper we discussed the possibility of improving the performance on the Synthetic to Real Benchmark by exploiting additional data augmentations. In particular, we analyzed two types of transformations, called LIDAR and Occlusion, which were presented in [42] and which allowed us to obtain a performance improvement, mainly with the DGCNN backbone. The objective of this augmentation strategy is in fact to emulate on training samples some of the corruptions that appear in real-world data.…”
Section: A. Baselines Details, Implementation and Reproducibility
confidence: 99%
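An occlusion-style augmentation of the kind described above can be sketched as removing the points farthest along a random viewing direction, roughly emulating self-occlusion from a single viewpoint. This is a simplified stand-in for the Occlusion transformation cited from [42], not its actual implementation, which uses a more faithful scan simulation.

```python
import numpy as np

def occlude(points: np.ndarray, drop_ratio: float = 0.25, seed: int = 0) -> np.ndarray:
    """Drop the fraction of points deepest along a random view axis.

    Illustrative sketch of an occlusion augmentation: project each
    point onto a random unit direction and discard the farthest ones,
    as if they were hidden from a single-viewpoint scan.
    """
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    depth = points @ direction  # scalar depth of each point along the view axis
    keep = depth <= np.quantile(depth, 1.0 - drop_ratio)
    return points[keep]

cloud = np.random.default_rng(1).random((1024, 3))
augmented = occlude(cloud, drop_ratio=0.25)
```

Applying such transforms during training exposes the model to corruption patterns resembling those in real scans, which is the stated objective of the augmentation strategy.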