2021 29th Conference of Open Innovations Association (FRUCT)
DOI: 10.23919/fruct52173.2021.9435562
Color-Optimized One-Pixel Attack Against Digital Pathology Images

Abstract: Modern artificial intelligence based medical imaging tools are vulnerable to model fooling attacks. Automated medical imaging methods support decision making by classifying samples as regular or as showing characteristics of abnormality. One use of such technology is the analysis of whole-slide image tissue samples. Consequently, attacks against artificial intelligence based medical imaging methods may diminish the credibility of modern diagnosis methods and, at worst, may lead to misdiagnosis wi…

Cited by 14 publications (8 citation statements)
References 18 publications
“…Their method identifies vulnerable pixels by analyzing discrepancies in confidence scores. Similarly, Korpihalkola et al [17] employed the differential evolution technique for digital pathology images, building upon the foundation laid by [5]. While previous endeavors [5,6,17] proposed heuristic methodologies utilizing differential evolution to identify one-pixel attacks, that of Nam et al [18] was based on rigorous experimentation on the MNIST dataset and proposed an adjustable exhaustive search method, leveraging parallelism in conjunction with the inherent properties of one-pixel attacks.…”
Section: Related Work
Mentioning confidence: 99%
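As context for the excerpt above, the differential-evolution search it refers to can be sketched in a few lines. The `predict_fn` wrapper and the hyperparameters below are assumptions made for illustration, not the authors' implementation:

```python
# A minimal sketch of a differential-evolution one-pixel search, assuming a
# hypothetical classifier wrapper predict_fn(image) -> class probabilities.
# Illustrative only; not the implementation from the cited papers.
from scipy.optimize import differential_evolution

def one_pixel_attack(image, target_class, predict_fn, maxiter=75, popsize=15):
    """Search for a single pixel (position plus RGB color) that maximizes the
    classifier's confidence in target_class."""
    h, w, _ = image.shape

    def perturb(params):
        x, y, r, g, b = params
        out = image.copy()
        out[int(y), int(x)] = [int(r), int(g), int(b)]  # overwrite one pixel
        return out

    def objective(params):
        probs = predict_fn(perturb(params))
        return -probs[target_class]  # minimize negative target confidence

    # Search space: pixel coordinates plus the RGB value written to that pixel.
    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(objective, bounds, maxiter=maxiter,
                                    popsize=popsize, recombination=0.7, seed=0)
    return perturb(result.x), result
```

The bounds encode that the entire perturbation is confined to one pixel's position and color; an untargeted variant would instead minimize the confidence of the correct class.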
“…As a representative instance among a number of experimental results, Figure 4 shows the confidence score of ResNet for input 384 (bird) of CIFAR-10. Figure 4a presents how the confidence score for the target class dog changes when, for pixel (17,14) of input 384, the blue value is fixed at 0 and the red and green values are varied. Additionally, Figures 4b-aa present the confidence scores obtained by fixing the blue value at 10 to 255, respectively.…”
Section: Confidence Score
Mentioning confidence: 99%
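The sweep described in this excerpt is easy to outline in code. The function name, the `predict_fn` wrapper, and the step size below are assumptions made for illustration:

```python
# A hedged sketch of the confidence-score sweep described in the excerpt,
# assuming a hypothetical predict_fn(image) -> class probabilities and a
# 32x32x3 uint8 image; the pixel, class index, and step size are illustrative.
import numpy as np

def confidence_surface(image, pixel_xy, target_class, predict_fn, blue=0, step=5):
    """Fix the blue channel of one pixel, sweep its red and green values,
    and record the confidence assigned to target_class for each combination."""
    x, y = pixel_xy
    values = list(range(0, 256, step))
    surface = np.zeros((len(values), len(values)))
    for i, red in enumerate(values):
        for j, green in enumerate(values):
            perturbed = image.copy()
            perturbed[y, x] = [red, green, blue]
            surface[i, j] = predict_fn(perturbed)[target_class]
    return surface
```

Repeating the sweep for blue values 0, 10, ..., 255 would yield one grid per blue setting, analogous to the panels of Figure 4 described in the excerpt.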
“…In our previous publications we introduced how an artificial neural network image classifier model could be fooled by changing only one pixel in the input image [16,17]. Those studies targeted the IBM CODAIT MAX breast cancer detector model [18].…”
Section: Data Source
Mentioning confidence: 99%
“…That first technical attack was a success, but the pixel changes in the images were quite easily observable by a human. It seemed that the attack was not realistic or comprehensive for real-world attackers, so we decided to further develop the attack methodology [17].…”
Section: Introduction
Mentioning confidence: 99%
“…That first technical attack was a success, but the pixel changes in the images were quite easily observable by a human. It seemed that the attack was not realistic or comprehensive for real-world attackers, so we decided to further develop the attack methodology [4].…”
Section: Introduction
Mentioning confidence: 99%