2021
DOI: 10.1186/s12880-020-00530-y

Universal adversarial attacks on deep neural networks for medical image classification

Abstract: Background Deep neural networks (DNNs) are widely investigated in medical image classification to achieve automated support for clinical diagnosis. It is necessary to evaluate the robustness of medical DNN tasks against adversarial attacks, as high-stakes decisions will be made based on the diagnosis. Several previous studies have considered simple adversarial attacks. However, the vulnerability of DNNs to more realistic and higher-risk attacks, such as universal adversarial perturbation (…


Cited by 110 publications (79 citation statements)
References 25 publications
“…Additionally, the authors of the study [68] demonstrated that pre-trained models increase adversarial transferability and that data/model inequality decreases an attack's efficacy. On the other hand, Hirano et al. [95] found that the transferability rate is low for non-targeted attacks. Two further interesting observations were made by Kovalev et al. [89].…”
Section: Discussion
confidence: 99%
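
The transferability finding quoted above can be probed with a small experiment: craft a non-targeted perturbation on one model and measure how often it also fools a second, independently initialized model. Below is a minimal TensorFlow/Keras sketch using a single-step (FGSM-style) perturbation and toy CNNs on synthetic data as stand-ins; the architecture, epsilon, and data are illustrative assumptions, not the setup of Hirano et al. [95].

```python
import numpy as np
import tensorflow as tf

def small_cnn(num_classes=3):
    # Hypothetical toy classifier standing in for VGG16/ResNet50-style backbones.
    inputs = tf.keras.Input(shape=(32, 32, 3))
    h = tf.keras.layers.Conv2D(8, 3, activation="relu")(inputs)
    h = tf.keras.layers.GlobalAveragePooling2D()(h)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(h)
    return tf.keras.Model(inputs, outputs)

def fgsm_perturbation(model, x, y, eps=0.03):
    # Single-step non-targeted perturbation crafted on the source model.
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    return eps * tf.sign(tape.gradient(loss, x))

def fooling_rate(model, x, x_adv):
    # Fraction of inputs whose predicted class changes under the perturbation.
    clean = tf.argmax(model(x, training=False), axis=1)
    adv = tf.argmax(model(x_adv, training=False), axis=1)
    return float(tf.reduce_mean(tf.cast(clean != adv, tf.float32)))

# Synthetic stand-in data; a real experiment would use the medical image test sets.
x = np.random.rand(16, 32, 32, 3).astype("float32")
y = np.random.randint(0, 3, size=(16,))

source, target = small_cnn(), small_cnn()
delta = fgsm_perturbation(source, x, y)
x_adv = tf.clip_by_value(x + delta, 0.0, 1.0)

print("fooling rate on source model:", fooling_rate(source, x, x_adv))
print("fooling rate on target model (transferability):", fooling_rate(target, x, x_adv))
```

A markedly lower fooling rate on the target model than on the source model is what a "low transferability rate" amounts to in the statement above.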
“…Hirano et al. [95] investigated universal adversarial attacks on DNNs for skin cancer, diabetic retinopathy, and pneumonia classification. They experimented with both targeted and untargeted attacks on several models, such as VGG16, VGG19, InceptionResNetV2, DenseNet169, DenseNet121, and ResNet50.…”
Section: Existing Adversarial Attacks On Medical Images
confidence: 99%
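
In the universal-attack setting described in this statement, one fixed, input-agnostic perturbation is added to every test image: a non-targeted attack is judged by how often predictions change, a targeted attack by how often the prediction becomes the attacker's chosen class. The sketch below shows only the evaluation step, assuming a trained tf.keras classifier, images scaled to [0, 1], and a precomputed perturbation; the function name and arguments are placeholders, not code from the cited paper.

```python
import tensorflow as tf

def evaluate_universal_perturbation(model, x, uap, target_class=None):
    """Apply one fixed perturbation `uap` to every image in `x` and score the attack.

    Non-targeted score: fraction of images whose predicted class changes.
    Targeted score: fraction of images classified as `target_class`.
    """
    x_adv = tf.clip_by_value(x + uap, 0.0, 1.0)               # keep pixels in a valid range
    clean_pred = tf.argmax(model(x, training=False), axis=1)
    adv_pred = tf.argmax(model(x_adv, training=False), axis=1)
    if target_class is None:
        # Non-targeted: success means the prediction flipped away from the clean one.
        return float(tf.reduce_mean(tf.cast(clean_pred != adv_pred, tf.float32)))
    # Targeted: success means the prediction was forced to the chosen class.
    return float(tf.reduce_mean(tf.cast(adv_pred == target_class, tf.float32)))
```

For the architectures named in the quote, `model` could be a VGG16, ResNet50, or DenseNet backbone from `tf.keras.applications` fine-tuned on the medical dataset, with the matching preprocessing applied to `x` beforehand.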
“…In some studies, adversarial training improved DL model robustness for multiple medical imaging modalities, such as lung CT and retinal optical coherence tomography (37,39,40). On the other hand, Hirano et al. found that adversarial training generally did not increase model robustness for classifying dermatoscopic images, optical coherence tomography images, and chest X-ray images (41). The difference in the effectiveness of adversarial training can be attributed to differences in adversarial training protocols (e.g., single-step vs. iterative approaches).…”
Section: Discussion
confidence: 99%
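
Adversarial training, as discussed in this statement, augments each training batch with perturbed copies of its images before the parameter update; single-step and iterative protocols differ only in how those perturbations are built. Below is a minimal single-step (FGSM-based) sketch in tf.keras with a toy model and synthetic data standing in for medical images; the epsilon, architecture, and optimizer are illustrative assumptions, not the protocols of the cited studies.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy classifier; real studies fine-tune large pretrained backbones.
inputs = tf.keras.Input(shape=(32, 32, 3))
h = tf.keras.layers.Conv2D(8, 3, activation="relu")(inputs)
h = tf.keras.layers.GlobalAveragePooling2D()(h)
outputs = tf.keras.layers.Dense(3, activation="softmax")(h)
model = tf.keras.Model(inputs, outputs)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam(1e-3)

def fgsm(x, y, eps=0.03):
    # Single-step perturbation; an iterative (PGD-style) protocol would repeat
    # this with a smaller step size and re-project onto the epsilon ball.
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    return tf.clip_by_value(x + eps * tf.sign(tape.gradient(loss, x)), 0.0, 1.0)

# Synthetic stand-in batch; real training would iterate over a medical image dataset.
x = tf.convert_to_tensor(np.random.rand(16, 32, 32, 3).astype("float32"))
y = tf.convert_to_tensor(np.random.randint(0, 3, size=(16,)))

for step in range(3):
    x_adv = fgsm(x, y)                            # craft perturbed copies of the batch
    x_mix = tf.concat([x, x_adv], axis=0)         # train on clean + adversarial examples
    y_mix = tf.concat([y, y], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_mix, model(x_mix, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```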
“…Universal perturbation attacks (UPA), which use iterative algorithms for targeted and non-targeted attacks, were proposed by [56] and achieved 80% accuracy in classification. Reference [57] presented two lightweight techniques, which used local perturbation and universal attacks.…”
Section: Related Work
confidence: 99%
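
The iterative construction referred to above can be sketched as a loop that accumulates one shared perturbation over many images: for each image, take a small loss-increasing step and re-project the perturbation onto an epsilon ball. The sketch below is a simplified gradient-ascent (FGSM-accumulation) variant for the non-targeted case, not the DeepFool-based algorithm of the original UAP work; `model`, `x`, `y`, and the hyperparameters are placeholders for a trained tf.keras classifier and its training images in [0, 1].

```python
import tensorflow as tf

def craft_universal_perturbation(model, x, y, eps=0.04, step=0.005, epochs=5):
    """Simplified non-targeted universal perturbation via accumulated gradient steps.

    A single perturbation `uap` is updated over all images and kept inside an
    L-infinity ball of radius `eps` (the projection step used by iterative
    universal-perturbation algorithms).
    """
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    uap = tf.zeros_like(x[:1])                       # one shared, input-agnostic perturbation
    for _ in range(epochs):
        for i in range(x.shape[0]):
            xi, yi = x[i:i + 1], y[i:i + 1]
            with tf.GradientTape() as tape:
                tape.watch(uap)
                adv = tf.clip_by_value(xi + uap, 0.0, 1.0)
                loss = loss_fn(yi, model(adv, training=False))
            grad = tape.gradient(loss, uap)
            uap = uap + step * tf.sign(grad)         # ascend the loss for this image
            uap = tf.clip_by_value(uap, -eps, eps)   # project back onto the eps-ball
    return uap
```

A targeted variant would instead step so as to decrease the loss toward a fixed target label, which is how such an attack can steer every image toward one chosen diagnosis.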