2019
DOI: 10.1007/978-3-030-31964-9_17

Risk Susceptibility of Brain Tumor Classification to Adversarial Attacks

Cited by 18 publications (9 citation statements)
References 7 publications
“…In the literature, there are also studies that used the same database for classification with pre-trained networks [23, 35–40], or that used as input only the tumor region or features extracted from it [7, 21, 23, 41, 42]. Similarly, in several papers, researchers modified this database prior to classification [36, 43–47]. The designed networks are usually simpler than existing pre-trained networks and run faster.…”
Section: Comparison with State-of-the-Art Methods
confidence: 99%
“…An unreliable MedAI output can be fatal in clinical settings, and erroneous results can steer the broader cycle of future research and healthcare solutions in a harmful direction. Adversarial attacks on neural networks can cause errors in identifying cancerous tumors and undermine confidence in MedAI output (Kotia et al., 2019). Szegedy et al. showed that very subtle adversarial inputs, which may not appear pathological, can change the output (Leung et al., 2015).…”
Section: Security and Integrity of MedAI
confidence: 99%
“…The vulnerability of brain tumor classification to adversarial attacks was studied by Kotia et al. [82]. They applied three different white-box attacks: a noise-based attack, the fast gradient sign method (FGSM), and virtual adversarial training (VAT) [83].…”
Section: Existing Adversarial Attacks on Medical Images
confidence: 99%
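
For readers unfamiliar with FGSM, the attack named in the statement above, here is a minimal PyTorch sketch of its one-step perturbation, x_adv = x + ε · sign(∇x L). The `model`, `loss_fn`, and `epsilon` names are illustrative assumptions, not the configuration used by Kotia et al.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.01):
    # Generic FGSM sketch (assumed setup, not the paper's exact attack):
    # take one gradient step on the input in the direction that
    # increases the classification loss, then clamp to valid pixels.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (hypothetical classifier and image batch):
# x_adv = fgsm_attack(net, torch.nn.functional.cross_entropy,
#                     images, labels, epsilon=8 / 255)
```

Because FGSM is a single gradient-sign step, the perturbation is cheap to compute and typically imperceptible at small ε, which is what makes such attacks a credible risk for medical image classifiers.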