2020
DOI: 10.1038/s42256-020-0186-1
Secure, privacy-preserving and federated machine learning in medical imaging

Abstract: Artificial intelligence (AI) methods have the potential to revolutionize the domain of medicine, as witnessed, for example, in medical imaging, where the application of computer vision techniques, traditional machine learning 1,2 and, more recently, deep neural networks has achieved remarkable successes. This progress can be ascribed to the release of large, curated corpora of images (ImageNet 3 perhaps being the best known), giving rise to performant pre-trained algorithms that facilitate transfer learning and …

Cited by 745 publications (438 citation statements)
References 77 publications
“…Several methods for adversarial attacks on black-box DNNs, which estimate adversarial perturbations using only model outputs (e.g., confidence scores), have been proposed [31][32][33]. The development and operation of secure, privacy-preserving, and federated DNNs are required in medical imaging [6].…”
Section: Discussion
confidence: 99%
“…Complex classifiers, including DNNs, can potentially cause catastrophic harm to society because they are often difficult to interpret [5]. More importantly, DNNs raise a number of security concerns [6]; specifically, DNNs are known to be vulnerable to adversarial examples [7,8], which are input images that cause misclassifications by DNNs and are typically generated by adding specific, imperceptible perturbations to original input images that were correctly classified by the DNNs. The existence of adversarial examples calls the generalization ability of DNNs into question, reduces model interpretability, and limits the applications of deep learning in safety- and security-critical environments [9]; in particular, adversarial examples can cause not only misdiagnoses but also various social disturbances [10].…”
Section: Introduction
confidence: 99%
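The mechanism described above (an imperceptible, loss-increasing perturbation added to a correctly classified input) can be illustrated with the fast gradient sign method (FGSM), one classic attack of this kind; the cited works may use other methods. The sketch below is fully hypothetical: a fixed toy logistic-regression "classifier" on a flattened 8×8 image stands in for a DNN, and all weights and names are invented for illustration.

```python
import numpy as np

# Hypothetical setup: a fixed logistic-regression "classifier" acting on
# a flattened 8x8 "image" (64 pixels in [0, 1]).
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # model weights (hypothetical)
b = 0.1                   # model bias (hypothetical)

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps=0.05):
    """Fast gradient sign method: move each pixel by +/- eps in the
    direction that increases the loss, then clip to the valid pixel range.

    For the logistic loss, the gradient w.r.t. the input is (p - y) * w.
    """
    grad_x = (predict(x) - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# A clean input (true label 0) and its adversarially perturbed counterpart.
x_clean = rng.uniform(0.0, 0.3, size=64)
x_adv = fgsm(x_clean, y=0, eps=0.05)

# No pixel moves by more than eps, yet the model's confidence in the
# wrong class strictly increases.
print(predict(x_clean), predict(x_adv))
```

The per-pixel bound `eps` is what makes the perturbation "imperceptible" in the quoted passage: the attacked image differs from the original by at most 5% of the pixel range, while the loss moves in the attacker's favor.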
“…AI-based solutions intrinsically rely on appropriate algorithms 25 , but even more so on sufficiently large datasets for training purposes 26 . Since the domain of medicine is inherently decentralized, the volume of data available locally is often insufficient to train reliable classifiers [27][28][29] .…”
Section: Introduction
confidence: 99%
“…As a consequence, centralization of data, for example via cloud solutions, has been one model to address these local limitations [30][31][32] . While beneficial from an AI perspective, centralized solutions have been shown to carry inherent hurdles of their own, including the increased traffic of large medical datasets, as well as data ownership, privacy, and security concerns that arise when ownership is disconnected from access and usage curation, thereby creating data monopolies that favor data aggregators 26 . Consequently, solutions to the challenges of central data models in AI, particularly when dealing with medical data, must be effective, with high accuracy and efficiency, privacy- and ethics-preserving, secure, and fault-tolerant by design [33][34][35][36] .…”
Section: Introduction
confidence: 99%
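The decentralized alternative alluded to in this passage can be sketched with federated averaging (FedAvg), one common scheme of this family: each site trains locally on its private data and only model parameters are aggregated centrally, so raw data never leave the institution. The example below is a minimal illustration, not the cited works' method; the three "institutions", the linear model, and all hyperparameters are invented.

```python
import numpy as np

# Minimal FedAvg sketch: a shared linear regression model trained across
# three simulated institutions, each holding private data.
rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])  # ground-truth parameters (hypothetical)

def make_site(n):
    """Generate one institution's private dataset."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

sites = [make_site(50) for _ in range(3)]  # three simulated institutions

def local_update(w, X, y, lr=0.1, steps=20):
    """A few gradient-descent steps on one site's private data only."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):  # communication rounds
    # Each site refines the current global model locally; only the
    # resulting parameter vectors are sent back, never the data.
    local = [local_update(w_global, X, y) for X, y in sites]
    # Server aggregates: average of site models, weighted by dataset size.
    sizes = np.array([len(y) for _, y in sites])
    w_global = np.average(local, axis=0, weights=sizes)
```

After a few rounds the aggregated model closely recovers the shared signal, illustrating how institutions can jointly train a model while keeping their data local; the privacy and security hardening the passage calls for (secure aggregation, fault tolerance) sits on top of this basic loop.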