2021
DOI: 10.48550/arxiv.2104.03863
Preprint

A single gradient step finds adversarial examples on random two-layers neural networks

Abstract: Daniely and Schacham recently showed that gradient descent finds adversarial examples on random undercomplete two-layers ReLU neural networks. The term "undercomplete" refers to the fact that their proof only holds when the number of neurons is a vanishing fraction of the ambient dimension. We extend their result to the overcomplete case, where the number of neurons is larger than the dimension (yet also subexponential in the dimension). In fact we prove that a single step of gradient descent suffices. We also…
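The single-step attack described in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the authors' exact construction: the dimension, width, weight scaling, and step size below are assumptions chosen so that one gradient step of the right length flips the sign of the network's output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random two-layer ReLU network f(x) = a^T relu(W x) with n neurons in
# ambient dimension d. Overcomplete regime: n > d. Scalings are illustrative.
d, n = 100, 1000
W = rng.standard_normal((n, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)

def f(x):
    return a @ np.maximum(W @ x, 0.0)

def grad_f(x):
    # Gradient of f at x (away from ReLU kinks): W^T (a * 1{Wx > 0})
    return W.T @ (a * (W @ x > 0))

x = rng.standard_normal(d)  # random input, norm ~ sqrt(d)
g = grad_f(x)

# One gradient step sized so the linearization lands at -f(x):
# f(x - eta * sign(f(x)) * g) ~= f(x) - eta * ||g||^2 * sign(f(x)) = -f(x).
eta = 2.0 * abs(f(x)) / (np.linalg.norm(g) ** 2 + 1e-12)
x_adv = x - np.sign(f(x)) * eta * g

# The perturbation is small relative to ||x||, yet the output sign flips,
# because f is close to linear at this scale for a random network.
```

The step length exploits exactly the near-linearity phenomenon the paper builds on: since few ReLU activations change along a short step, the first-order prediction `f(x_adv) ≈ -f(x)` is accurate, so one step suffices.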

Cited by 1 publication (12 citation statements) | References 6 publications
“…Inspired by the work of Shamir et al. [2019], Daniely and Schacham [2020] prove that small ℓ2-norm adversarial perturbations can be found by a multi-step gradient descent method for random ReLU networks with small width, where each layer has vanishing width compared to the previous layer. Bubeck et al. [2021] generalize this result to two-layer randomly initialized networks with relatively large width and show that a single step of gradient descent suffices to find adversarial examples. Bartlett et al. [2021] further generalize this result to random multilayer ReLU networks.…”
Section: Introduction
Confidence: 69%
“…Interestingly, linearity has been hypothesized as a key reason for the existence of adversarial examples in neural networks [Goodfellow et al., 2014]. Furthermore, this was used to prove the existence of adversarial attacks for random neural networks [Bubeck et al., 2021; Bartlett et al., 2021], and it also provides the basis of our analysis in this work.…”
Section: Related Work
Confidence: 82%