2022
DOI: 10.1109/tpami.2020.3044712
Geometry-Aware Generation of Adversarial Point Clouds

Cited by 73 publications (68 citation statements)
References 41 publications
“…3D adversarial attacks aim to generate 3D adversarial samples in a human-unnoticeable way. 3D adversarial samples usually consist of two types: adversarial point cloud (Xiang et al, 2019;Tsai et al, 2020) and adversarial mesh (Wen et al, 2020;Zhang et al, 2021a). Currently, most 3D adversarial attacks are about point cloud.…”
Section: 3D Adversarial Attacks
confidence: 99%
“…However, those adversarial point clouds usually contain a lot of outliers, which are not human-unnoticeable. To solve this problem, the following works (Wen et al, 2020;Tsai et al, 2020) focus on generating adversarial point cloud with much less outliers. Tsai et al (2020) proposed kNN attack that aims to generating smooth perturbation by adding chamfer distance and kNN distance to loss function as the regularization terms during optimization.…”
Section: 3D Adversarial Attacks
confidence: 99%
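The kNN attack described in the excerpt above optimizes a misclassification objective regularized by a Chamfer-distance term (closeness to the clean cloud) and a kNN-distance term (suppressing isolated outlier points). A minimal NumPy sketch of those two regularizers follows; the function names, weights, and combined objective are illustrative assumptions, not the implementation of Tsai et al. (2020).

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P (n,3) and Q (m,3):
    mean nearest-neighbor distance from P to Q plus from Q to P."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (n, m) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def knn_distance(P, k=5):
    """Mean distance from each point to its k nearest neighbors within P.
    Penalizing this term discourages perturbed points that drift away
    from the surface as visible outliers."""
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude each point's self-distance
    knn = np.sort(d, axis=1)[:, :k]       # k smallest distances per point
    return knn.mean()

def attack_objective(cls_loss, adv, clean, lam_c=1.0, lam_k=1.0):
    """Hypothetical combined attack loss: classification term plus the two
    regularizers, with illustrative weights lam_c and lam_k."""
    return cls_loss + lam_c * chamfer_distance(adv, clean) + lam_k * knn_distance(adv)
```

During the attack, `cls_loss` would come from the victim classifier (e.g. a margin or cross-entropy term on the perturbed cloud), and the perturbation is optimized against `attack_objective` so that the adversarial cloud stays close to the clean one and contains few outliers.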
“…We are interested in understanding if the use of an additional input prevents adversarial attacks on the other input. Though we could utilize adversarial attacks on the LIDAR input like previous work [11,2,12,13], we instead choose to focus on modifying the image input. This is because we find that the model relies more heavily on LIDAR data and successful attacks using modification of just the LIDAR is more trivial.…”
Section: Introduction
confidence: 99%