2023
DOI: 10.1007/s11042-023-14702-9

Adversarial examples: attacks and defences on medical deep learning systems

Cited by 17 publications (6 citation statements)
References 100 publications
“…We can see that the highest F1 score, which is 51.6%, was achieved while detecting the Gaussian-noise-based adversarial attacks. To attack the vision-based ADSs at runtime with different intensities, we adjust the noise term σ and the attack intensity ε in (10). We set five levels of attack intensities to perturb the images in either a multiplicative way or an additive way.…”
Section: Discussion
confidence: 99%
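The statement above tunes a Gaussian noise term σ and an attack intensity ε and applies the perturbation either additively or multiplicatively; the exact formulation is in the citing paper's equation (10), which is not reproduced here. A minimal NumPy sketch of the general idea, assuming the function name, parameter names, and intensity values are illustrative placeholders rather than the cited paper's definitions:

```python
import numpy as np

def gaussian_perturb(image, epsilon, sigma, mode="additive", rng=None):
    """Perturb an image with Gaussian noise at a given attack intensity.

    image:   float array scaled to [0, 1]
    epsilon: attack intensity (scales the perturbation)
    sigma:   standard deviation of the Gaussian noise term
    mode:    "additive" adds scaled noise; "multiplicative" scales pixels by (1 + noise)
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(loc=0.0, scale=sigma, size=image.shape)
    if mode == "additive":
        perturbed = image + epsilon * noise
    else:
        perturbed = image * (1.0 + epsilon * noise)
    return np.clip(perturbed, 0.0, 1.0)

# Five attack intensities, mirroring the cited setup (values here are placeholders).
image = np.random.rand(224, 224, 3)
for eps in [0.01, 0.05, 0.1, 0.2, 0.4]:
    adv = gaussian_perturb(image, epsilon=eps, sigma=1.0, mode="additive")
```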
“…The primary objective of the adversarial attacks is to take into account the model's vulnerabilities and craft adversarial input to fool the DNN model into producing incorrect results [5][6][7][8]. Various adversarial attack strategies have recently been proposed to fool the DNN model into producing incorrect results in different application domains [9][10][11][12][13][14]. Among these attack strategies, DeepSearch [15], DeepFool [16], projected gradient descent (PGD) [17], and the fast gradient sign method (FGSM) [2] are commonly adopted methods for adversarial attacks.…”
Section: Introduction
confidence: 99%
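Of the attack methods named in this statement, FGSM is the simplest to illustrate: it takes one signed gradient step of size ε in the direction that increases the loss. A minimal PyTorch sketch, assuming a generic classifier, cross-entropy loss, and an arbitrary ε (none of which are taken from the cited papers):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Fast Gradient Sign Method: x_adv = x + epsilon * sign(grad_x L(f(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step, then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Example usage (hypothetical classifier and batch):
# x_adv = fgsm_attack(model, images, labels, epsilon=0.03)
```

PGD generalizes this by iterating the same signed step several times and projecting back into the ε-ball around the original input after each step.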
“…Several attacks have been introduced by previous research works, such as the Fast Sign Gradient Method [7], DeepFool [8], Carlini and Wagner attack [9], JSMA [10], One Pixel Attack [6]. Several surveys related to adversarial machine learning attacks including [11]–[13] have been published. The surveys focused on various aspects such as attack type [11], adversarial attacks in real-world scenarios [11], adversarial examples [12] and adversarial robustness from the interpretability perspective and attacks in specific domains (i.e., medical domain) [13].…”
Section: Related Work
confidence: 99%
“…However, the performance of CNNs is highly influenced by the image quality, object visibility, and other conditions during training [1]. For instance, in a medical application, such as recognizing the surgical tools in laparoscopic surgery video streams, some visual challenges highly influence the CNNs' performance for object classification [1,2]. These visual challenges are quite common in real medical applications, i.e., the surgical tools may be occluded by tissue or smoke may be generated during surgery, lenses are stained by blood, and motion blur is caused by movement or unstable camera position [1].…”
Section: Introduction
confidence: 99%