Proceedings of the 33rd Annual Computer Security Applications Conference 2017
DOI: 10.1145/3134600.3134606
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification

Abstract: Deep neural networks (DNNs) have transformed several artificial intelligence research areas, including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example whose label can be correctly predicted by a DNN classifier. An attacker can add a small, carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label…

Cited by 163 publications (144 citation statements); References 26 publications
“…Randomized smoothing was initially proposed as empirical defenses [6,26] without formal certified robustness guarantees. For example, Cao & Gong [6] proposed to use uniform noise sampled from a hypercube centered at an input. Lecuyer et al [20] derived the first certified robustness guarantee for randomized smoothing with Gaussian or Laplacian noise by utilizing differential privacy techniques.…”
Section: Related Work
confidence: 99%
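The smoothing mechanism described in this excerpt can be sketched as follows. This is a minimal illustration, not the implementation from [6] or [20]: `model`, `sigma`, and `n_samples` are illustrative placeholders rather than the papers' tuned settings.

```python
import numpy as np

def smoothed_predict(model, x, sigma=0.25, n_samples=100, seed=0):
    """Randomized-smoothing prediction: majority vote over noisy copies of x.

    `model` is any callable mapping a batch of inputs to integer class
    labels; sigma and n_samples are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    # Gaussian noise, as in Lecuyer et al. [20]; Cao & Gong [6] instead
    # sample uniformly from a hypercube centered at x.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    labels = model(x[None, ...] + noise)
    # The smoothed classifier outputs the most frequent predicted label.
    return np.bincount(labels).argmax()
```

The certified-robustness results cited above come from analyzing this vote: if the majority class wins by a large enough margin under the noise distribution, no perturbation within a bounded radius can change the smoothed prediction.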
“…We then checked whether the input's label will be flipped in the perturbed network using Eq. (8). In the figures, green and red points represent non-flippable and flippable inputs, respectively.…”
Section: Results
confidence: 99%
“…Cao and Gong aimed to increase the robustness of neural networks [30]. Instead of using only the test sample itself to determine its class, hundreds of neighboring samples are generated in a surrounding hypercube, and a voting algorithm decides the label of the test sample.…”
Section: B Decision Related Topics
confidence: 99%
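The hypercube-voting procedure described in this excerpt can be sketched roughly as follows. The radius and sample count are illustrative assumptions, not the values tuned in [30]:

```python
import numpy as np

def region_based_predict(model, x, radius=0.3, n_samples=200, seed=0):
    """Region-based classification: majority vote over a hypercube around x.

    `model` maps a batch of inputs to integer class labels; `radius`
    and `n_samples` are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    # Draw neighbors uniformly from the hypercube [x - radius, x + radius].
    neighbors = x[None, ...] + rng.uniform(-radius, radius,
                                           size=(n_samples,) + x.shape)
    labels = model(neighbors)
    # The majority vote decides the label of the test sample.
    return np.bincount(labels).argmax()
```

A plain point prediction would be `model(x[None, ...])[0]`; replacing it with the vote is the defense's key idea, since a small adversarial perturbation that flips the point prediction often fails to flip the majority over the whole region.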
“…Inspired by this region-based classification method [30], He et al [23] moved from hypercubes to larger neighborhoods. They proposed an orthogonal-direction ensemble attack called OptMargin, which could evade the region-based classification defense mentioned in [30].…”
Section: B Decision Related Topics
confidence: 99%