2021
DOI: 10.1109/jproc.2021.3050042
Optimism in the Face of Adversity: Understanding and Improving Deep Learning Through Adversarial Robustness

Cited by 42 publications (22 citation statements) · References 78 publications
“…Hence, f_θ is expected to learn more task-relevant features with AT. While AT was originally designed to increase the robustness of deep networks to adversarial perturbations [19,20], it has also contributed to other tasks [27]. Recently, it was shown that AT in the source domain can improve transfer learning [14,15]: adversarially trained models from a source domain can help improve accuracy on the target task after fine-tuning, despite performing worse, in terms of task accuracy, on the source domain.…”
Section: Target Task and Training Strategies
confidence: 99%
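For context on the AT procedure the excerpt refers to: adversarial training replaces clean training inputs with worst-case perturbed ones found by an inner attack, typically projected gradient descent (PGD). Below is a minimal sketch in PyTorch, assuming a hypothetical classifier `model` and data loader `train_loader`; it illustrates the general technique, not the cited papers' exact setup.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """PGD: search the L-infinity ball of radius eps around x for a
    perturbation that maximizes the classification loss."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project back into the ball
        x_adv = x_adv.clamp(0, 1)                         # keep pixels in valid range
    return x_adv.detach()

def adversarial_training_epoch(model, train_loader, optimizer):
    """One epoch of adversarial training: fit the model on PGD examples
    instead of (or in addition to) the clean inputs."""
    model.train()
    for x, y in train_loader:
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

The trade-off quoted above (worse source accuracy, better transfer) comes from this inner maximization: the model is forced to rely on perturbation-stable features, which tend to be more task-relevant after fine-tuning.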
“…However, the practical application of DNNs to disease diagnosis may still be debatable owing to the existence of adversarial examples [8][9][10]; these are input images that are typically generated by adding specific, imperceptible perturbations to the original input images, leading to DNN misclassification. Given that diagnosing disease involves making high-stakes decisions, the existence of adversarial examples is a security concern [11].…”
Section: Introduction
confidence: 99%
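To make the quoted mechanism concrete: the classic way to produce such imperceptible perturbations is the fast gradient sign method (FGSM). A minimal sketch in PyTorch follows; `model`, `image_batch`, and `label_batch` are hypothetical placeholders, not objects from the cited work.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=2/255):
    """FGSM: a one-step perturbation of magnitude eps per pixel, taken
    in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1)  # keep pixels valid
    return x_adv.detach()

# Usage: a 2/255 per-pixel change is visually imperceptible, yet it
# often flips the predicted label:
# x_adv = fgsm_example(model, image_batch, label_batch)
# print(model(image_batch).argmax(1), model(x_adv).argmax(1))
```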
“…A simple solution to avoid adversarial attacks is to render training data and any other similar domain-specific data (e.g., medical images in the case of medical image classification) publicly unavailable, because various methods of adversarial attack [8][9][10] (from attack methods that assume access to DNN model weights to those that do not) generally assume the use of such data to generate adversarial perturbations. Given that the availability of medical images is generally limited in terms of security and privacy preservation [11], adversarial attacks on DNN-based medical image classification seem to be limited.…”
Section: Introduction
confidence: 99%
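The quoted point, that even attacks without access to model weights still depend on domain data, refers to transfer attacks: an adversary trains a surrogate model on similar public data, crafts adversarial examples against the surrogate, and relies on their tendency to transfer to the target model. A hedged sketch of that idea, reusing the hypothetical `pgd_attack` from the earlier sketch:

```python
import torch

def transfer_attack_rate(target_model, surrogate_model, public_loader):
    """Black-box transfer attack sketch: adversarial examples crafted on
    a surrogate trained from similar public data often fool the target
    model too, which is why restricting data access limits attacks."""
    fooled, total = 0, 0
    for x, y in public_loader:
        x_adv = pgd_attack(surrogate_model, x, y)   # no target gradients used
        with torch.no_grad():
            pred = target_model(x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total  # fraction of perturbed inputs misclassified
```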