2021 4th International Conference on Signal Processing and Machine Learning
DOI: 10.1145/3483207.3483224

One-Pixel Attack Deceives Computer-Assisted Diagnosis of Cancer

Abstract: In this article we demonstrate that a state-of-the-art machine learning model predicting whether a whole-slide image contains mitosis can be fooled by changing just a single pixel in the input image. Computer vision and machine learning can be used to automate various tasks in cancer diagnostics and detection. If an attacker can manipulate the automated processing, the results can be devastating and in the worst case lead to wrong diagnoses and treatments. In this research a one-pixel attack is demonstrated in a…
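The attack the abstract describes searches for a single pixel whose modification flips the classifier's output. One-pixel attacks are typically driven by differential evolution; the sketch below uses a plain random search and a toy stand-in classifier to convey the idea. The `toy_model` function, image size, and 0.9 threshold are illustrative assumptions, not the paper's actual model or setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(img):
    """Dummy classifier: 'mitosis' confidence = mean of the red channel."""
    return img[..., 0].mean()

def one_pixel_attack(img, model, threshold=0.9, trials=2000):
    """Try random single-pixel changes until confidence drops below `threshold`."""
    h, w, _ = img.shape
    for _ in range(trials):
        x, y = rng.integers(0, h), rng.integers(0, w)
        color = rng.random(3)             # candidate RGB replacement value
        perturbed = img.copy()
        perturbed[x, y] = color           # change exactly one pixel
        if model(perturbed) < threshold:  # prediction flipped: attack succeeded
            return perturbed, (x, y), color
    return None                           # no successful single-pixel attack found

# Toy image the dummy model scores at 0.95 ("mitosis").
img = np.full((4, 4, 3), 0.95)
result = one_pixel_attack(img, toy_model)
```

A real attack would replace `toy_model` with the target network's confidence score and use a black-box optimizer such as differential evolution instead of random search.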

Cited by 10 publications (10 citation statements)
References 22 publications (32 reference statements)
“…A general problem facing the adoption of deep learning methods in clinical tasks is their inherent unreliability, exemplified by high prediction variation caused by minimal input variation (e.g., the one-pixel attack [156]). This is further exacerbated by the non-transparent decision-making process inside deep neural networks, which are therefore often described as 'black box models' [24]. The performance of deep learning methods on out-of-domain datasets has also been assessed as unreliable [157].…”
Section: Uncertainty Quantification as GAN Evaluation Metric
confidence: 99%
“…The susceptibility of medical imaging deep learning systems to white-box and black-box adversarial attacks is reviewed [245,243] and investigated [246,228,247] in recent studies. Cancer imaging models show a high level of susceptibility [248,156,249]. Besides imaging data, image quantification features such as radiomics features [250] are also commonly used in cancer imaging as input into CADe and CADx systems.…”
Section: Adversarial Attacks Putting Patients at Risk
confidence: 99%
“…In our previous publications we showed how an artificial neural network image classifier could be fooled by changing only one pixel in the input image [16,17]. Those studies targeted the IBM CODAIT MAX breast cancer detector model [18].…”
Section: Data Source
confidence: 99%
“…The dataset used in this study contains the one-pixel attack results from the previous study [16], together with information about the attacked image, such as the attack pixel's location and the color values of its neighboring pixels. Attacks considered successful were selected from the dataset: mitosis-to-normal attacks in which the neural network's confidence score was decreased to less than 0.9 were included.…”
Section: Data Source
confidence: 99%
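The selection step quoted above reduces to filtering attack records by direction and post-attack confidence. A minimal sketch, assuming a hypothetical record schema (the field names `x`, `y`, `confidence`, and `direction` are illustrative, not the study's actual data layout):

```python
# Each record: pixel location of the attack, the network's post-attack
# confidence score, and the attack direction. Values are made up.
records = [
    {"x": 12, "y": 40, "confidence": 0.42, "direction": "mitosis-to-normal"},
    {"x": 7,  "y": 3,  "confidence": 0.97, "direction": "mitosis-to-normal"},
    {"x": 5,  "y": 9,  "confidence": 0.31, "direction": "normal-to-mitosis"},
]

# Keep only successful mitosis-to-normal attacks: confidence dropped below 0.9.
successful = [
    r for r in records
    if r["direction"] == "mitosis-to-normal" and r["confidence"] < 0.9
]
```

Here only the first record survives the filter: the second fails the confidence threshold and the third has the wrong attack direction.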