2021
DOI: 10.48550/arxiv.2110.08042
Preprint

Adversarial Attacks on ML Defense Models Competition

Yinpeng Dong, Qi-An Fu, Xiao Yang, et al.

Abstract: Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years. However, the progress of building more robust models is usually hampered by the incomplete or incorrect robustness evaluation. To accelerate the research on reliable evaluation of adversarial robustness of the current defense models in image classification, the TSAIL group at Tsinghua University and the Alibaba Security group organ…

Cited by 1 publication (1 citation statement)
References 17 publications
“…Adversarial training. The idea of adversarial training (AT) stems from the seminal work of [783], while other AT frameworks like PGD-AT [866] and TRADES [884] occupied the winner solutions in the adversarial competitions [885,886,887,888]. Based on these primary AT frameworks, many improvements have been proposed via encoding the mechanisms inspired from other domains, including ensemble learning [889,890], metric learning [891,892,893,894], generative modeling [895,896,897,898], weight perturbing [899], semi-supervised learning [900,901,902], and self-supervised learning [903,904,905,906].…”
Section: Defenses
Citation type: mentioning
Confidence: 99%
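For reference, the AT frameworks named in this statement (e.g., PGD-AT) are built on the standard min-max formulation of adversarial training; a minimal sketch of that objective is given below, with f_θ the classifier, L the training loss, and ε the ℓ_p perturbation budget (standard formulation, not quoted from the citing publication):

\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \Big]

The inner maximization is typically approximated with projected gradient descent (PGD), and the outer minimization updates θ on the resulting adversarial examples.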