2020
DOI: 10.48550/arxiv.2007.07236
Preprint
Multitask Learning Strengthens Adversarial Robustness

Cited by 4 publications (3 citation statements)
References 26 publications
“…Many defense methods against adversarial examples have been proposed to protect deep learning models [18–25].…”
Section: Defense Methods Against Adversarial Examples (mentioning)
confidence: 99%
“…Multiple defense mechanisms have been proposed to protect deep learning models from the threat of adversarial examples [21–28]. Among these, adversarial training is the most effective way to improve model robustness [6, 29, 30].…”
Section: Adversarial Defense Methods (mentioning)
confidence: 99%
“…Furthermore, learning features for multiple tasks can act as a regularizer, improving generalization. Mao et al. [4] showed that multi-task learning improves adversarial robustness, which is critical for safety applications.…”
Section: Introduction (mentioning)
confidence: 99%