Image Transformation can make Neural Networks more robust against Adversarial Examples
Preprint, 2019
DOI: 10.48550/arxiv.1901.03037

Cited by 2 publications (3 citation statements)
References 0 publications
“…Detection mechanisms have also been extensively explored, ranging from using modular redundancies (e.g., input transformation [10], [24], [67], multiple models [57], and weights randomization [18], [73]), to cascading a dedicated DNN to detect adversaries [43], [42], [21], [45]. Wang et al. [71] propose to spatially share the DNN accelerator resources between the original network and the detection network.…”
Section: Related Work and Discussion
confidence: 99%
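The input-transformation redundancy mentioned in the citation above can be made concrete with a minimal sketch (not the method of the cited works): classify the input and a transformed copy of it, and flag a disagreement between the two predictions as a possible adversarial example. The bit-depth-reduction transform and the toy linear classifier below are illustrative placeholders, not components of any cited system.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 28 * 28))  # toy linear "classifier"; stands in for a real DNN


def predict(image):
    """Return the predicted class for a 28x28 image with values in [0, 1]."""
    return int(np.argmax(W @ image.reshape(-1)))


def squeeze(image, bits=4):
    """Reduce bit depth -- one common, cheap input transformation."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels


def looks_adversarial(image):
    """Flag the input if the original and transformed predictions disagree."""
    return predict(image) != predict(squeeze(image))


x = rng.uniform(size=(28, 28))
print("flagged as adversarial:", looks_adversarial(x))
```

The design choice behind this family of detectors is that a legitimate input usually keeps its label under a mild transformation, while an adversarial perturbation tuned to the exact input pixels often does not.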
“…[Figure arithmetic: 5.47 = 2.0 × 0.7 + 1.4 × 0.9 + 1.5 × 0.8 + 1.0 × 0.9 + ……; and 2.0 × 0.7 + 1.4 × 0.9 + 1.5 × 0.8 > 0.6 × 5.47, assuming θ = 0.6.] Another class of defenses uses redundancies to defend against adversarial attacks [67], [57], similar to the multi-module redundancy used in classic fault-tolerant systems [62]. This scheme, however, introduces high overhead, limiting its applicability at inference time.…”
Section: Legitimate Sample Adversarial Sample Perturbation
confidence: 99%
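Taking the quoted figure arithmetic at face value, it appears to check whether a few large contributions already account for at least a fraction θ of a total sum; a minimal sketch of that check, using only the numbers given in the quote (the remaining terms of the 5.47 total are elided there and are taken as given), is below.

```python
# Threshold check from the quoted arithmetic: do the three listed contributions
# exceed theta times the stated total? (5.47 is taken directly from the quote;
# its remaining terms are elided in the original text.)
contributions = [2.0 * 0.7, 1.4 * 0.9, 1.5 * 0.8]  # 1.4 + 1.26 + 1.2 = 3.86
total = 5.47
theta = 0.6
print(sum(contributions) > theta * total)  # True: 3.86 > 3.282
```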