2021 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv53792.2021.00127
Geometric Adversarial Attacks and Defenses on 3D Point Clouds

Cited by 10 publications (10 citation statements) · References 38 publications
“…The outlier loss and the uniform loss encourage the generator to preserve the point cloud shape. Besides these GAN-based attacks, Lang et al. [93] proposed an attack that alters the reconstructed geometry of a 3D point cloud using an autoencoder trained on semantic shape classes, while Mariani et al. [94] proposed a method for creating adversarial attacks on surfaces embedded in 3D space, under weak smoothness assumptions on the perceptibility of the attack.…”
Section: Generative Strategies
mentioning, confidence: 99%
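The autoencoder-based attack summarized above lends itself to a compact illustration. Below is a minimal PyTorch sketch of a reconstruction-space perturbation: it optimizes an additive perturbation so the autoencoder's output drifts toward a target shape while a Chamfer penalty keeps the input near the source. The `autoencoder` call signature, the loss choice, and the weight `lam` are assumptions for illustration, not the exact formulation of [93].

```python
import torch

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).
    d = torch.cdist(a, b)                          # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def reconstruction_attack(autoencoder, source, target, steps=200, lr=1e-2, lam=1.0):
    """Perturb `source` (N, 3) so the autoencoder reconstructs geometry closer
    to `target`, while a Chamfer penalty keeps the input near the source."""
    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = source + delta
        recon = autoencoder(adv.unsqueeze(0)).squeeze(0)  # assumed (1,N,3) -> (1,M,3)
        # Pull the reconstruction toward the target; keep the input imperceptible.
        loss = chamfer(recon, target) + lam * chamfer(adv, source)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (source + delta).detach()
```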
“…Zheng et al. [38] observe that the saliency of point cloud networks is localized and that the network relies on a small subset of the signal for the task. This observation has led to extensive work in privacy and security [54,55,56,57] that leverages this localized saliency to design adversarial attack (and defense) mechanisms on trained models. We note that our setting differs significantly from adversarial attack work, since we protect a dataset that can be used to train arbitrary models, while adversarial methods focus on attacking/protecting the robustness of model predictions.…”
Section: Related Work
mentioning, confidence: 99%
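The localized-saliency observation can be made concrete: compute a per-point saliency from the gradient of the true-class score and drop the most salient points. Note that [38] defines saliency via shifting points toward the cloud center, so the gradient-magnitude score below is a simplification; the `model` interface and `n_drop` default are hypothetical.

```python
import torch

def drop_salient_points(model, points, label, n_drop=50):
    """Simplified saliency-guided point dropping: remove the n_drop points
    whose gradients influence the true-class score the most."""
    pts = points.detach().clone().requires_grad_(True)
    logits = model(pts.unsqueeze(0))          # assumed (1, N, 3) -> (1, num_classes)
    logits[0, label].backward()
    saliency = pts.grad.norm(dim=1)           # per-point gradient magnitude, (N,)
    keep = saliency.argsort()[:-n_drop]       # indices of the least salient points
    return points[keep]
```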
“…However, this form of robustness quantification only holds for the concrete samples and does not consider adversarial transformations. The latter problem was addressed by a recent line of work that extended the well-studied problem of adversarial attacks for images to the 3D point cloud domain by considering adversarial point perturbation and generation [20,23,28,29,53,60,62], real-world adversarial objects for LIDAR sensors [6], occlusion attacks [55], and adversarial rotations [66]. The adversarial vulnerability of 3D point cloud models has spurred the development of corresponding defense methods, based on perturbation measurement [62], outlier removal and upsampling [67], and adversarial training [28,65].…”
Section: Background and Related Work
mentioning, confidence: 99%
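Of the defenses listed, outlier removal is especially simple to illustrate: perturbation attacks often leave points floating away from the underlying surface, so a statistical outlier removal pass in the spirit of [67] filters points whose mean k-nearest-neighbor distance is anomalously large. The threshold rule and the parameters `k` and `alpha` below are illustrative assumptions, not the cited paper's exact settings.

```python
import torch

def statistical_outlier_removal(points, k=10, alpha=1.1):
    """Drop points whose mean k-NN distance exceeds the cloud-wide mean by
    alpha standard deviations (a common pre-classification defense)."""
    d = torch.cdist(points, points)                    # (N, N) pairwise distances
    knn = d.topk(k + 1, largest=False).values[:, 1:]   # drop the zero self-distance
    mean_knn = knn.mean(dim=1)                         # (N,)
    thresh = mean_knn.mean() + alpha * mean_knn.std()
    return points[mean_knn <= thresh]
```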