2022
DOI: 10.21203/rs.3.rs-1478332/v1
Preprint

Differentially private federated deep learning for multi-site medical image segmentation

Abstract: Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer. Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models. However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data. Thus, supplementing FL with privacy-enhancing technologies (PETs) such as differential privacy (…
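As a rough illustration of the federated setup the abstract describes, the sketch below shows one round of federated averaging (FedAvg): each site updates the shared model on its own data and only model parameters leave the site. This is a minimal toy sketch, not the paper's implementation; the function names (`fedavg_round`, `local_update`) and the use of plain NumPy arrays in place of real model weights are assumptions for illustration.

```python
import numpy as np

def fedavg_round(global_params, client_datasets, local_update, weights=None):
    """One round of federated averaging: every client starts from the
    current global parameters, trains locally, and only the resulting
    parameter vectors are sent back and averaged (no raw data moves)."""
    updates = [local_update(global_params.copy(), d) for d in client_datasets]
    if weights is None:
        weights = np.ones(len(updates))  # uniform weighting by default
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # weighted average of the locally updated parameter vectors
    return sum(wi * ui for wi, ui in zip(w, updates))
```

In practice the weights are typically proportional to each site's dataset size, and `local_update` would run several epochs of SGD on the site's local images.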

Cited by 10 publications (2 citation statements) · References 29 publications
“…(10) is valid for the model studied here (of two parabolic bands). Other models, with three bands or with non-parabolic band dispersion, can give very different sizes of B_0^BI for the same γ [54].…”
Section: Model
confidence: 99%
“…The Differentially Private Gradient Descent (DPGD) approach for semantic segmentation in CT images extends Differential Privacy (DP) guarantees to gradient-based optimisation by clipping each per-sample gradient in a minibatch to a bound on its L2 norm. 24 The optimisation step is then performed after adding Gaussian noise to the averaged minibatch gradients. Despite minor privacy-utility trade-offs, differentially private stochastic gradient descent (DP-SGD) has been shown to defeat privacy-centred attacks, and larger models have been shown to be more resilient to model inversion attacks than smaller ones.…”
Section: Research on FL for Medical Imaging
confidence: 99%
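The clip-then-noise step described in this citing statement can be sketched as follows. This is a minimal NumPy illustration of a single DP-SGD update under assumed defaults (the function name `dp_sgd_step` and all parameter values are illustrative, not taken from the cited work): each per-sample gradient is rescaled so its L2 norm is at most the clipping bound, the clipped gradients are averaged, Gaussian noise calibrated to the clipping bound is added, and only then is the descent step taken.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD step: clip each per-sample gradient to clip_norm (L2),
    average the clipped gradients, add Gaussian noise whose scale is tied
    to the clipping bound, then take a gradient-descent step."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # rescale only if the gradient exceeds the clipping bound
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # noise std: noise_multiplier * clip_norm, divided by the batch size
    # because the noise is added to the *averaged* gradient here
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The clipping bound caps any single patient's influence on the update, and the noise scale is chosen relative to that bound so the privacy guarantee holds regardless of how large the raw gradients are.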