2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9206959
Detection of Adversarial Examples in Deep Neural Networks with Natural Scene Statistics

Cited by 20 publications (18 citation statements). References 17 publications.
“…5. It is noted that (1) SVM with the 18-D NSS feature may fail to generalize due to insufficient sampling (hence the below-diagonal ROC); (2) NSS performs better for small ε, but performance saturates with larger ε, because NSS does not incorporate any cue from network gradient behavior; (3) small ε is difficult for ARC, but its performance soars with larger ε towards 100%, which is consistent and expected from our visualization; (4) SVM with ARCv can generalize against all PGD-like attacks, while NSS fails for MIM; (5) SVM with NSS may generalize against some non-PGD-like attacks [29], while ARC could not due to SAE uniqueness; (6) SVM with the 2-D NSS feature ("Method 2" in [29]) fails to generalize. Thus, ARC achieves competitive performance consistently across different settings despite the extreme limits, because the ARC feature is low-dimensional and incorporates cues from network gradient behavior.…”
Section: Comparison With Previous Attack Detection Methods (supporting)
confidence: 80%
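The quoted comparison revolves around fitting an SVM to low-dimensional natural-scene-statistics (NSS) features and tracking detection ROC as the perturbation budget ε grows. The sketch below is only an illustration of that general setup under stated assumptions, not the pipeline of this paper or of ARC: it uses a toy 2-D NSS-style feature (GGD shape and variance of MSCN coefficients), uniform noise of budget ε as a crude stand-in for PGD/MIM adversarial perturbations, and scikit-learn's SVC; the helper names (`mscn`, `ggd_params`, `nss_feature`) and the synthetic data are hypothetical.

```python
# Hedged sketch: NSS-style adversarial-example detector with an SVM.
# NOT the paper's 18-D feature or attack setup; the feature here is a toy
# 2-D (GGD shape, variance) summary of MSCN coefficients, and bounded
# uniform noise of budget eps stands in for PGD/MIM perturbations.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score


def mscn(image, sigma=7 / 6):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients of a grayscale image."""
    mu = gaussian_filter(image, sigma)
    var = np.clip(gaussian_filter(image * image, sigma) - mu * mu, 0.0, None)
    return (image - mu) / (np.sqrt(var) + 1.0)


def ggd_params(x):
    """Moment-matching estimate of (shape, variance) for a zero-mean GGD."""
    x = x.ravel()
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) + 1e-12) ** 2
    alphas = np.arange(0.2, 10.0, 0.001)
    rhos = gamma(1 / alphas) * gamma(3 / alphas) / gamma(2 / alphas) ** 2
    return np.array([alphas[np.argmin(np.abs(rhos - rho))], sigma_sq])


def nss_feature(image):
    """Toy 2-D NSS feature; the paper's detector uses a richer feature vector."""
    return ggd_params(mscn(image.astype(np.float64)))


# Toy data: smooth "clean" images vs. the same images plus bounded noise (assumption).
rng = np.random.default_rng(0)
eps = 8 / 255
clean = np.stack([gaussian_filter(rng.random((32, 32)), 3.0) for _ in range(200)])
adv = np.clip(clean + rng.uniform(-eps, eps, clean.shape), 0.0, 1.0)

X = np.array([nss_feature(im) for im in np.concatenate([clean, adv])])
y = np.array([0] * len(clean) + [1] * len(adv))
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

detector = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
scores = detector.predict_proba(X_te)[:, 1]
print(f"detector ROC-AUC at eps={eps:.4f}: {roc_auc_score(y_te, scores):.3f}")
```

Sweeping ε and re-measuring the ROC-AUC in such a sketch is one way to probe the small-ε versus large-ε behavior the citation discusses.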
“…(2) non-intrusive; (3) data-undemanding, the most relevant methods that do not lack ImageNet evaluation are [29,30,31,32,33]. But [30,31,32,33] still require a considerable amount of data to build accurate, (relatively) high-dimensional statistics.…”
Section: Comparison With Previous Attack Detection Methods (mentioning)
confidence: 99%
“…A more intuitive idea is to prevent adversarial examples from attacking the model by detecting them at the input. The detection of adversarial examples has achieved good research results in the image domain [25], [26]. In the text domain, certain methods of generating adversarial examples produce spelling errors.…”
Section: Related Work (mentioning)
confidence: 99%