2021
DOI: 10.48550/arxiv.2103.00363
Preprint

Tiny Adversarial Multi-Objective Oneshot Neural Architecture Search

Cited by 3 publications (2 citation statements)
References 40 publications
“…propose Adversarially Robust Distillation (ARD), where they encourage student networks to mimic their teacher's output within an ε-ball of training samples. Furthermore, there are a few NAS methods [32,58,63] that jointly optimise accuracy, latency and robustness. Compared to these methods, similar-sized RNAS-CL models achieve both higher clean and robust accuracy.…”
Section: Efficient and Robust Models
confidence: 99%
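The ARD idea cited above — a student trained to match its teacher's output on points inside an ε-ball around each training sample — can be sketched minimally. This is a hypothetical toy illustration, not the authors' implementation: the "models" are plain linear maps, the inner maximization is a single FGSM step, and all names (`W_teacher`, `fgsm_perturb`, etc.) are assumptions for illustration.

```python
import numpy as np

# Toy sketch of Adversarially Robust Distillation (ARD): the student is
# trained so its output on an adversarial point inside an eps-ball
# matches the teacher's output on the clean sample. Linear "models" are
# stand-ins purely for illustration.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q):
    # KL(p || q), summed over classes, averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

W_teacher = rng.normal(size=(4, 3))   # fixed, assumed-robust teacher
W_student = rng.normal(size=(4, 3))   # student being distilled

def fgsm_perturb(x, y, W, eps):
    # One-step FGSM on the student's cross-entropy loss; the result
    # stays inside the L-infinity eps-ball around x by construction.
    p = softmax(x @ W)
    grad_x = (p - y) @ W.T            # d(CE)/dx for softmax + CE
    return x + eps * np.sign(grad_x)

x = rng.normal(size=(8, 4))
y = np.eye(3)[rng.integers(0, 3, size=8)]
eps = 0.1

x_adv = fgsm_perturb(x, y, W_student, eps)
# ARD-style objective: student output on x_adv should match the
# teacher's output on the clean x.
loss = kl_div(softmax(x @ W_teacher), softmax(x_adv @ W_student))
```

A full ARD training loop would backpropagate `loss` into `W_student`; the sketch stops at the objective to keep the ε-ball constraint and the distillation target visible.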
“…Among the existing studies, one type of approach is to use adversarial training during the search process to increase the robustness of the model. For example, some studies [8,20] adopted one-shot NAS technology. When training the supernet, adversarial training is used to enhance the robustness of the model.…”
Section: Related Work
confidence: 99%
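The scheme described in this statement — adversarial training applied while training a one-shot supernet — can be sketched as follows. This is a hedged, hypothetical illustration under simplifying assumptions: the supernet is a single layer with three candidate linear ops, single-path sampling picks one candidate per step, and a one-step FGSM attack supplies the adversarial batch; none of these names or choices come from the cited papers.

```python
import numpy as np

# Sketch of adversarial training inside one-shot NAS: each step samples
# one candidate operation from the supernet (single-path sampling) and
# takes a gradient step on FGSM adversarial examples, so the shared
# weights are trained toward robustness.

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy supernet: one layer with 3 candidate linear ops.
candidates = [rng.normal(size=(4, 3)) for _ in range(3)]

def train_step(x, y, eps=0.1, lr=0.05):
    op = int(rng.integers(len(candidates)))   # sample a single path
    W = candidates[op]
    # FGSM on the sampled subnetwork's cross-entropy loss.
    p = softmax(x @ W)
    x_adv = x + eps * np.sign((p - y) @ W.T)
    # Gradient step on the adversarial batch (softmax + CE gradient).
    p_adv = softmax(x_adv @ W)
    grad_W = x_adv.T @ (p_adv - y) / len(x)
    candidates[op] = W - lr * grad_W
    return op

x = rng.normal(size=(8, 4))
y = np.eye(3)[rng.integers(0, 3, size=8)]
for _ in range(10):
    train_step(x, y)
```

After the supernet is trained this way, a search phase would evaluate sampled subnetworks on both clean and adversarial inputs; that evaluation step is omitted here.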