2019
DOI: 10.48550/arxiv.1911.11607
Preprint

Deep Learning with Gaussian Differential Privacy

Abstract: Deep learning models are often trained on datasets that contain sensitive information such as individuals' shopping transactions, personal contacts, and medical records. An increasingly important line of work therefore has sought to train neural networks subject to privacy constraints that are specified by differential privacy or its divergence-based relaxations. These privacy definitions, however, have weaknesses in handling certain important primitives (composition and subsampling), thereby giving loose or c…

Cited by 15 publications (29 citation statements)
References 44 publications
“…We now introduce the DP optimizers [2,1] to train DP neural networks. One popular optimizer is DP-SGD [55,16,3,9] in Algorithm 1, and more optimizers such as DP-Adam can be found in Appendix F. In contrast to standard SGD, DP-SGD has two unique steps: the per-sample clipping (to guarantee the sensitivity of per-sample gradients) and the random noise addition (to guarantee the privacy of models), both discussed in detail via the Gaussian mechanism in Lemma 5.2.…”
Section: Differentially Private Gradient Methods (mentioning)
confidence: 99%
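The two steps named in this excerpt (per-sample clipping, then Gaussian noise addition) are the core of DP-SGD. Below is a minimal sketch of one DP-SGD step on a toy squared-loss model; the loss, hyperparameter values, and function name are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch of one DP-SGD step: per-sample gradient clipping followed by
# Gaussian noise addition. Illustrative only; model and hyperparameters are assumed.
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update for linear regression with squared loss (toy example)."""
    batch_size = X_batch.shape[0]
    clipped_sum = np.zeros_like(w)
    for x, y in zip(X_batch, y_batch):
        # Per-sample gradient of 0.5 * (x @ w - y)^2 with respect to w.
        g = (x @ w - y) * x
        # Clip each per-sample gradient to bound its L2 sensitivity.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        clipped_sum += g
    # Gaussian mechanism: noise scale calibrated to the clipping norm.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    grad = (clipped_sum + noise) / batch_size
    return w - lr * grad

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
w_true = rng.normal(size=5)
y = X @ w_true
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
```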
“…For the same differentially private mechanism, different privacy accountants (e.g., the Moments accountant [3,13], Gaussian differential privacy (GDP) [24,9], and the Fourier accountant [38], each based on a different composition theory) accumulate the privacy risk (σ, n, p, δ, T) differently over T iterations. The next result shows that DP optimizers with global clipping are as private as those with local clipping, independent of the choice of privacy accountant.…”
Section: DP Optimizers Privacy Analysis (mentioning)
confidence: 99%
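To make the accountant idea concrete, here is a minimal sketch of a GDP-style accountant, assuming the Poisson-subsampling CLT approximation μ ≈ p·√(T·(e^{1/σ²} − 1)) and the μ-GDP duality δ(ε) = Φ(−ε/μ + μ/2) − e^ε·Φ(−ε/μ − μ/2) used in the GDP literature; the function names and the example parameters are mine, not from any particular library.

```python
# Sketch of a GDP accountant for DP-SGD under the stated CLT approximation.
# Function names and example parameters are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def gdp_mu(sample_rate, steps, noise_multiplier):
    """Approximate GDP parameter mu after `steps` iterations of subsampled DP-SGD."""
    return sample_rate * np.sqrt(steps * (np.exp(noise_multiplier ** -2) - 1.0))

def gdp_delta(mu, epsilon):
    """Delta achievable at a given epsilon under mu-GDP (duality formula)."""
    return (norm.cdf(-epsilon / mu + mu / 2)
            - np.exp(epsilon) * norm.cdf(-epsilon / mu - mu / 2))

# Example: batch size 256 on n = 50,000 points, sigma = 1.1, 10,000 steps.
mu = gdp_mu(sample_rate=256 / 50_000, steps=10_000, noise_multiplier=1.1)
print(f"mu = {mu:.3f}, delta at eps=2: {gdp_delta(mu, epsilon=2.0):.2e}")
```

Other accountants (Moments, Fourier) would take the same (σ, p, T) inputs but compose the per-step privacy loss under different theories, which is why they can report different (ε, δ) for the same mechanism.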
“…However, there may exist strategic users who manipulate this process to serve their own interests, e.g., by uploading a fake model update x̃_i ≠ x_i, where x_i is the true model update (omitting the subscript t for notational simplicity). We account for two main types of strategic user behavior in FL: 1) free riding [6], which generates random model parameters without actually training, to save training costs such as computing power and storage; and 2) overly privacy-preserving behavior, which adds excessive noise to the model parameters for privacy protection [11]-[13].…”
Section: B. Undesirable User Strategies (mentioning)
confidence: 99%
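A toy sketch of the two strategic behaviors described in this excerpt within one federated-averaging round follows; the update dimension, noise scale, and function names are illustrative assumptions, not from the cited papers.

```python
# Toy illustration of two strategic user behaviors in a federated averaging round:
# a free rider uploads random parameters without training, and an overly
# privacy-preserving user adds excessive Gaussian noise to its true update.
# All names and scales here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dim = 10
true_update = rng.normal(size=dim)          # honest local model update x_i

def free_rider_update(dim):
    # Skips local training entirely and fabricates a random update.
    return rng.normal(size=dim)

def overly_private_update(x, noise_scale=10.0):
    # Adds far more noise than any privacy analysis would require.
    return x + rng.normal(scale=noise_scale, size=x.shape)

uploads = [true_update, free_rider_update(dim), overly_private_update(true_update)]
aggregate = np.mean(uploads, axis=0)        # naive FedAvg over the three uploads
print(np.linalg.norm(aggregate - true_update))  # distortion caused by strategic users
```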