2018
DOI: 10.48550/arxiv.1805.11770
Preprint

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

Cited by 19 publications (39 citation statements) · References 17 publications
“…• ZOO (Chen et al., 2017b) (and also Liu et al., 2017; Bhagoji et al., 2018; Tu et al., 2018) numerically estimates gradients and then performs gradient descent, making it powerful but potentially ineffective when the loss surface is difficult to optimize over.…”
Section: Apply Gradient-free Attacks (mentioning)
confidence: 99%
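The ZOO-style approach described in this statement can be illustrated with a minimal sketch: symmetric finite differences approximate partial derivatives of a black-box loss using only function queries. The names (`loss_fn`, `estimate_gradient`) and the smoothing parameter `h` are illustrative assumptions, not the cited papers' actual implementations.

```python
import numpy as np

def estimate_gradient(loss_fn, x, h=1e-4, num_coords=None, rng=None):
    """Zeroth-order (finite-difference) gradient estimate of a black-box loss.

    Sketch only: ZOO-style attacks query loss_fn(x), never its gradient, and
    approximate each partial derivative by a symmetric difference. `x` is
    assumed to be a 1-D float array; `num_coords` optionally subsamples
    coordinates per step, since full estimation costs 2*d queries for
    d-dimensional x.
    """
    rng = rng or np.random.default_rng()
    d = x.size
    coords = np.arange(d) if num_coords is None else rng.choice(d, num_coords, replace=False)
    grad = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = h
        # Symmetric difference: (f(x + h*e_i) - f(x - h*e_i)) / (2h)
        grad[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2.0 * h)
    return grad
```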
“…The loss $L(\hat{g})$ is the minimum expected squared $\ell_2$ distance between the true gradient $\nabla f(x)$ and the scaled estimator $b\hat{g}$. The previous work [32] also uses the expected squared $\ell_2$ distance $\mathbb{E}\|\nabla f(x) - \hat{g}\|_2^2$ as the loss function, which is similar to ours. However, the value of that loss function changes with the magnitude of the estimator $\hat{g}$.…”
Section: Gradient Estimation Framework (mentioning)
confidence: 93%
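Reconstructed in our own notation (an assumption, not the cited paper's derivation): minimizing over the scale $b$ has a closed form, and the resulting loss is invariant to rescaling $\hat{g}$, which is the contrast being drawn with the fixed loss of [32].

```latex
% Sketch in assumed notation. Expanding E||∇f(x) - b ĝ||² in b and
% setting the derivative with respect to b to zero gives:
\[
  L(\hat{g}) \;=\; \min_{b}\; \mathbb{E}\,\bigl\|\nabla f(x) - b\,\hat{g}\bigr\|_2^2,
  \qquad
  b^{\ast} \;=\; \frac{\mathbb{E}\,\langle \nabla f(x), \hat{g} \rangle}{\mathbb{E}\,\|\hat{g}\|_2^2},
\]
\[
  L(\hat{g}) \;=\; \|\nabla f(x)\|_2^2
  \;-\; \frac{\bigl(\mathbb{E}\,\langle \nabla f(x), \hat{g} \rangle\bigr)^2}{\mathbb{E}\,\|\hat{g}\|_2^2}.
\]
% Rescaling ĝ -> c·ĝ leaves L(ĝ) unchanged (c cancels), whereas the fixed
% loss E||∇f(x) - ĝ||² used in [32] depends on the magnitude of ĝ.
```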
“…Many methods [27,6,3,7,16,24,32,17] have been proposed to perform black-box adversarial attacks. A common idea is to use an approximate gradient instead of the true gradient for crafting adversarial examples.…”
Section: Introduction (mentioning)
confidence: 99%
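The "approximate gradient instead of the true gradient" idea in the statement above amounts to plugging a zeroth-order estimate into an ordinary gradient-based attack loop. This sketch reuses the hypothetical `estimate_gradient` above; the step size `lr`, the sign update, and the $\ell_\infty$ budget `eps` are all assumptions for illustration.

```python
import numpy as np
# Assumes estimate_gradient from the sketch above is in scope.

def black_box_attack(loss_fn, x0, steps=100, lr=0.01, eps=0.1, **est_kwargs):
    """Iterative attack driven by an approximate (zeroth-order) gradient.

    Sketch only: ascends an attack objective loss_fn using the
    finite-difference estimate, then projects the iterate back into an
    l-infinity ball of radius eps around the original input x0.
    """
    x = x0.copy()
    for _ in range(steps):
        g = estimate_gradient(loss_fn, x, **est_kwargs)
        x = x + lr * np.sign(g)              # ascend the attack objective
        x = np.clip(x, x0 - eps, x0 + eps)   # stay within the perturbation budget
    return x
```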
“…STO aims to secure a system by deliberately hiding or concealing its security flaws (Van Oorschot [2003]). Interestingly, some recent works have shown that even in such a "black-box" setting, it is possible to fool the ML classifier with high probability (Chen et al. [2017b], Papernot et al. [2017], Bhagoji et al. [2017], Tu et al. [2018]). These black-box attacks can be broadly classified into two categories: 1) knowledge transfer based attacks, and 2) zeroth-order optimization based attacks.…”
Section: Introduction (mentioning)
confidence: 99%