2019
DOI: 10.1609/aaai.v33i01.33014536
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

Abstract: Many modern machine learning classifiers are shown to be vulnerable to adversarial perturbations of the instances. Despite a massive amount of work focusing on making classifiers robust, the task seems quite challenging. In this work, through a theoretical study, we investigate the adversarial risk and robustness of classifiers and draw a connection to the well-known phenomenon of "concentration of measure" in metric measure spaces. We show that if the metric probability space of the test instance is concentrated…
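As background for the citation statements below, here is a minimal sketch of the two quantities the abstract relates, in standard notation (the paper's own formalization may differ): the adversarial risk of a hypothesis h against perturbations of size ε under a metric d, with ground truth c, and the concentration function α of the metric probability space (X, d, μ).

\mathrm{AdvRisk}_{\varepsilon}(h) = \Pr_{x \sim \mu}\left[\exists\, x' \in X :\ d(x, x') \le \varepsilon \,\wedge\, h(x') \ne c(x')\right]

\alpha(\varepsilon) = 1 - \inf\left\{ \mu(A_{\varepsilon}) : A \subseteq X,\ \mu(A) \ge \tfrac{1}{2} \right\}, \qquad A_{\varepsilon} = \{ x \in X : d(x, A) \le \varepsilon \}

The space is "concentrated" when α(ε) decays rapidly in ε; the abstract's claim is that such decay forces the adversarial risk toward 1 even for small perturbation budgets.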

Cited by 84 publications (90 citation statements)
References 15 publications
“…The setup holds asymptotically with respect to the representational dimension. In the context of metric probability spaces, Mahloujifar et al [33] blend the aforementioned approaches by arguing that the isoperimetric inequalities hold approximately over a much larger distribution family as the dimension m goes to infinity.…”
Section: Differences With Previous Studies (mentioning; confidence: 99%)
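For a concrete instance of the dimension dependence the quoted passage refers to (a classical fact, not taken from the citing paper): Lévy's bound for the unit sphere S^{m-1} with the geodesic metric and uniform measure gives, up to the exact absolute constant C (which varies across textbooks),

\alpha_{S^{m-1}}(\varepsilon) \le C\, e^{-(m-2)\,\varepsilon^{2}/2},

so the ε-expansion of any half-measure set covers all but an exponentially small fraction of the sphere as the dimension m grows.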
“…In such so-called evasion attacks [BFR14, CW17, SZS+14, GMP18] that find "adversarial examples", the goal of the adversary is to perturb the test input x into a "close" input x′ under some metric d (perhaps because this small perturbation is imperceptible to humans) in such a way that this tampering makes the hypothesis h make a mistake. In [MDM19], it was also shown that the concentration of measure can potentially lead to inherent evasion attacks, as long as the input metric probability space (X, d, μ) is concentrated. This holds, e.g., if the space is a Normal Lévy family [Lév51, AM85].…”
Section: Polynomial-time Attacks on Robust Learning (mentioning; confidence: 99%)
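For reference, the "Normal Lévy family" condition invoked above says that a sequence of metric probability spaces (X_m, d_m, μ_m) has concentration functions decaying exponentially in the dimension; in the standard formulation (the constants C_1, C_2 here are generic, not the paper's),

\alpha_{m}(\varepsilon) \le C_{1}\, e^{-C_{2}\, \varepsilon^{2} m} \quad \text{for absolute constants } C_{1}, C_{2} > 0.

The expansion argument behind such inherent attacks is then: if the error region E = \{x : h(x) \ne c(x)\} has \mu(E) \ge 1/2, concentration gives \mu(E_{\varepsilon}) \ge 1 - \alpha(\varepsilon), so all but an \alpha(\varepsilon) fraction of test inputs can be pushed into an error by an ε-perturbation; [MDM19] develops versions of this argument for error regions of smaller initial measure.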
“…Most notably, Katz et al [9] and Weng et al [21] have found that computing a provably secure region of the input space is approximately computationally hard. Mahloujifar et al [14] explain the prevalence of adversarial examples by making a connection to the "concentration of measure" in metric spaces. Recently, Zhang et al [23] found that adversaries are more dense sufficiently far from the manifold of training data.…”
Section: Related Work (mentioning; confidence: 99%)