Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022
DOI: 10.1145/3534678.3539241
Availability Attacks Create Shortcuts

Cited by 24 publications (29 citation statements)
References 13 publications
“…This lets CUDA generation add higher amounts of noise in the image space, specifically along the edges in images, and makes it resilient to AT with small additive noise budgets. In Figure 2, with the help of t-SNE plots [51], we also find that the noise added by CUDA generation is not linearly separable, whereas the perturbations of existing works on unlearnability are [54].…”
Section: Introduction
confidence: 93%
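The linear-separability claim in this excerpt can be probed directly: train a linear classifier on the perturbations alone and check whether it recovers the class labels. Below is a minimal sketch under assumed inputs (the array names `delta` and `y` are hypothetical placeholders for the attack's per-image perturbations and labels); near-perfect accuracy indicates the perturbations form a linearly separable shortcut.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_separability_score(delta: np.ndarray, y: np.ndarray) -> float:
    """Accuracy of a linear classifier trained on the perturbations alone.

    delta: (N, D) flattened per-image perturbations; y: (N,) class labels.
    A score near 1.0 means the perturbations are (almost) linearly separable
    by class, i.e. they supply a shortcut a network can latch onto.
    """
    d_train, d_test, y_train, y_test = train_test_split(
        delta, y, test_size=0.2, random_state=0, stratify=y
    )
    clf = LogisticRegression(max_iter=2000)
    clf.fit(d_train, y_train)
    return clf.score(d_test, y_test)
```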
“…Fowl et al [10] show that error-maximizing noise can also produce strong poisoning attacks. However, none of these poisoning techniques offers data protection under adversarial training (AT) [49,54]. Fu et al [11] propose a min-min-max optimization technique to generate poisoned data that offers better unlearnability effects under AT.…”
Section: Related Work
confidence: 99%
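The min-min-max idea mentioned above can be sketched as alternating inner updates. The following is an illustrative simplification, not Fu et al.'s exact procedure: an inner maximization finds a bounded adversarial perturbation, an inner minimization updates the protective noise so the adversarially perturbed example stays easy to fit, and the outer minimization is the caller's ordinary model-training step. All names, budgets, and step sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def min_min_max_step(model, x, y, delta_u, eps_u=8/255, eps_a=4/255,
                     steps=5, alpha=2/255):
    """One illustrative min-min-max round on a batch (x, y).

    delta_u is the persistent protective ("unlearnable") noise for this batch;
    delta_a is a fresh worst-case adversarial perturbation (the inner max).
    """
    # Inner max: adversarial perturbation against the current protective noise.
    delta_a = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta_u + delta_a), y)
        grad, = torch.autograd.grad(loss, delta_a)
        delta_a = (delta_a + alpha * grad.sign()).clamp(-eps_a, eps_a)
        delta_a = delta_a.detach().requires_grad_(True)

    # Inner min: update protective noise so the perturbed data stays easy to fit.
    delta_u = delta_u.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta_u + delta_a.detach()), y)
    grad, = torch.autograd.grad(loss, delta_u)
    delta_u = (delta_u - alpha * grad.sign()).clamp(-eps_u, eps_u).detach()

    # Outer min over the model parameters is the usual training step, done by the caller.
    return delta_u
```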
“…The training objective is designed to create a spurious correlation between the noisy image and the ground-truth labels. Concurrently, Yu et al [65] empirically investigate various types of availability attacks and show that almost all of them leverage these spurious features to create a shortcut within neural networks [17]. Yu et al [65] then propose a fast and scalable approach that synthesizes randomly initialized, linearly separable perturbations, which can produce an availability attack for an entire dataset in a few seconds.…”
Section: Related Work
confidence: 99%
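The synthetic-perturbation idea can be illustrated with class-wise random patterns: because every image of a class receives the same fixed pattern, the perturbations are linearly separable by construction. This is a simplified sketch of that kind of generator, not the authors' exact method; the function name and the L-infinity budget `eps` are assumptions.

```python
import numpy as np

def synthetic_classwise_perturbations(images, labels, num_classes, eps=8/255, seed=0):
    """Add one fixed random pattern per class (illustrative simplification).

    images: float array in [0, 1] of shape (N, H, W, C); labels: (N,) int array.
    Each class gets its own fixed sign pattern scaled to the budget eps, so the
    resulting perturbations are linearly separable and act as a shortcut.
    """
    rng = np.random.default_rng(seed)
    patterns = eps * rng.choice([-1.0, 1.0], size=(num_classes,) + images.shape[1:])
    poisoned = np.clip(images + patterns[labels], 0.0, 1.0)
    return poisoned, patterns
```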
“…Recently, there has been an increasing number of studies on hindering the unauthorized use of personal data for neural network image classifiers [13,28,66,15,16,65,56,45]. These methods tend to add an imperceptible amount of noise to the clean images so that, while the data looks the same as the original, it provides no meaningful patterns for neural networks to learn.…”
Section: Introduction
confidence: 99%
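The imperceptibility requirement described here is typically enforced with a small L-infinity budget on the added noise. A minimal check, assuming `clean` and `protected` are the original and noise-added datasets (hypothetical names), could look like this:

```python
import numpy as np

def check_imperceptibility(clean, protected, eps=8/255):
    """Verify that protected images stay visually close to the originals.

    clean / protected: float arrays in [0, 1] with identical shapes.
    Returns True if the worst-case per-pixel change stays within the budget.
    """
    delta = protected - clean
    linf = np.abs(delta).max()        # worst-case pixel change
    mean_abs = np.abs(delta).mean()   # average pixel change
    print(f"L_inf = {linf:.4f} (budget {eps:.4f}), mean |delta| = {mean_abs:.5f}")
    return linf <= eps + 1e-6
```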