2021 | Preprint
DOI: 10.1101/2021.01.17.21249704

Using Adversarial Images to Assess the Stability of Deep Learning Models Trained on Diagnostic Images in Oncology

Abstract: Background: Deep learning (DL) models have shown the ability to automate the classification of medical images used for cancer detection. Unfortunately, recent studies have found that DL models are vulnerable to adversarial attacks which manipulate models into making incorrect predictions with high confidence. There is a need for better understanding of how adversarial attacks impact the predictive ability of DL models in the medical image domain. Methods: We studied the adversarial attack susceptibility of DL model…

Cited by 5 publications (3 citation statements)
References 34 publications
“…These attacks are specifically designed to exploit the local instability of AI models, from logistic regression to deep neural networks to NLP [116]. A prime target is image-based analyses, where input data can be manipulated with small variations, creating adversarial images that result in a significant decrease in accuracy and performance of DL models, despite having imperceptible differences from the original images to the human eye [117][118][119]. Adversarial techniques can also manipulate medical data for financial incentives such as exploiting medical billing and reimbursement systems, or to bias clinical trial outcomes [120].…”
Section: Adversarial Attacks
confidence: 99%
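
As an illustration of the mechanism this statement describes, the sketch below uses the Fast Gradient Sign Method (FGSM), a standard way to build such imperceptibly perturbed images. It is a minimal PyTorch example, not the attack used in the cited works; the model, input tensors, and epsilon budget are hypothetical placeholders.

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Perturb `image` by one epsilon-sized step along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A small, sign-only step: visually negligible, yet often enough to
    # flip the model's prediction with high confidence.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained classifier `model` and a batch
# (image, label) drawn from a diagnostic-imaging dataset:
# adv = fgsm_attack(model, image, label, epsilon=8 / 255)
# clean_pred = model(image).argmax(dim=1)
# adv_pred   = model(adv).argmax(dim=1)   # frequently differs from clean_pred
```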
“…The susceptibility of medical imaging deep learning systems to white box and black box adversarial attacks is reviewed [245,243] and investigated [246,228,247] in recent studies. Cancer imaging models show a high level of susceptibility [248,156,249]. Besides imaging data, image quantification features such as radiomics features [250] are also commonly used in cancer imaging as input into CADe and CADx systems.…”
Section: Adversarial Attacks Putting Patients At Risk
confidence: 99%
“…This behaviour has been studied for various tasks, such as classification [29,19], object detection [34] and semantic segmentation [32,2,7], highlighting the importance of reliably assessing robustness. To improve this dimension, adversarial robustness has been studied both from the side of the attacks [22,3,9,24,14] and the defenses [35,27,23,17], obtaining sizable progress towards DNN models resistant to adversarial perturbations; however, still much is left to advance. In particular, recognition tasks in medical domains are of utmost importance for robustness, as these tasks aim at safety-critical applications in which brittleness could have dramatic consequences.…”
Section: Introduction
confidence: 99%
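
One common defense among those referenced here is adversarial training, in which each minibatch is augmented with perturbed copies before the usual gradient step. The sketch below is a minimal, assumed PyTorch version using an FGSM-style perturbation; the optimizer, batch tensors, and equal weighting of clean and adversarial losses are illustrative choices, not taken from any specific cited defense.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One training step on a mix of clean and FGSM-perturbed inputs."""
    # Craft adversarial copies of the batch on the fly.
    images_adv = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images_adv), labels).backward()
    images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Standard supervised update, averaging the clean and adversarial losses
    # so the model learns to stay stable under small perturbations.
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(images), labels) \
         + 0.5 * F.cross_entropy(model(images_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```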