2021
DOI: 10.1109/rbme.2020.3013489
Secure and Robust Machine Learning for Healthcare: A Survey

Abstract: Recent years have witnessed widespread adoption of machine learning (ML)/deep learning (DL) techniques due to their superior performance for a variety of healthcare applications ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding the impressive performance of ML/DL, there are still lingering doubts regarding the robustness of ML/DL in healthcare settings (which is traditionally considered quit…


Citations: Cited by 382 publications (189 citation statements)
References: 148 publications (154 reference statements)
“…Each defense method presented in the literature so far has been shown to be resilient only to a particular attack realized in specific settings, and it fails to withstand stronger, unseen attacks. Therefore, the development of adversarially robust ML/DL models remains an open research problem, and the literature suggests that worst-case robustness analysis should be performed under adversarial ML settings (Qayyum et al., 2020a; Qayyum et al., 2020b; Ilahi et al., 2020). In addition, it has been argued in the literature that most ML developers and security incident responders are unequipped with the tools required to secure industry-grade ML systems against adversarial ML attacks (Kumar et al., 2020).…”
Section: Open Research Issues (mentioning)
confidence: 99%
“…Reproducing medical or clinical studies will be necessary for GAN-produced synthetic data (SD) to gain mainstream adoption and to dispel the scepticism it is generally met with. The medical domain is known for its slow pace in adopting new technologies, and predictive ML is still far from meeting its full implementation potential (Qayyum et al., 2020). Medical professionals care foremost about the well-being of their patients and will only consider results obtained from synthetic data if they have assurance that those results are valid (Rankin et al., 2020).…”
Section: Benchmarking a Priority (mentioning)
confidence: 99%
“…In addition to attacks, errors can also stem from noise [10,11], improper annotation [12], bias [13-15], and more. Omitting certain data types, or failing to include enough variety of data to produce reliable results, is also considered a source of error [16,17].…”
Section: Introduction (mentioning)
confidence: 99%
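The annotation-error point above can be sketched in a few lines: the synthetic two-cluster data, the nearest-centroid classifier, and the 45% label-flip rate are all illustrative assumptions chosen to show how label noise alone degrades accuracy.

```python
# Hedged sketch: asymmetric label noise (one error source noted above)
# degrading a simple nearest-centroid classifier on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),   # class 0 cluster
               rng.normal(+1.0, 1.0, size=(n, 2))])  # class 1 cluster
y = np.array([0] * n + [1] * n)

def fit_and_score(X, labels, y_true):
    """Fit class centroids on (possibly noisy) labels, score on true labels."""
    c0 = X[labels == 0].mean(axis=0)
    c1 = X[labels == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1))
    return (pred.astype(int) == y_true).mean()

acc_clean = fit_and_score(X, y, y)

y_noisy = y.copy()
flip = (y == 1) & (rng.random(y.size) < 0.45)  # mislabel 45% of class 1 as 0
y_noisy[flip] = 0
acc_noisy = fit_and_score(X, y_noisy, y)
print(acc_clean, acc_noisy)
```

The mislabeled points pull the class-0 centroid toward class 1, shifting the decision boundary and lowering accuracy on the true labels, which is the mechanism by which improper annotation causes the errors the citing work describes.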