Harvard Data Science Review 2020
DOI: 10.1162/99608f92.cfc5dd25

Deep Learning with Gaussian Differential Privacy

Abstract: Deep learning models are often trained on data sets that contain sensitive information such as individuals' shopping transactions, personal contacts, and medical records. An increasingly important line of work therefore has sought to train neural networks subject to privacy constraints that are specified by differential privacy or its divergence-based relaxations. These privacy definitions, however, have weaknesses in handling certain important primitives (composition and subsampling), thereby giving loose or …
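The training primitive that the abstract's privacy constraints apply to is noisy SGD: per-example gradient clipping followed by Gaussian noise. The sketch below is illustrative only, assuming plain NumPy with hypothetical names; it is not the paper's code or the tensorflow/privacy API.

```python
# Illustrative sketch of one noisy SGD step (per-example clipping + Gaussian noise).
# `per_example_grads` holds one gradient vector per sampled example;
# all names and default values here are assumptions, not from the paper.
import numpy as np

def noisy_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                   noise_multiplier=1.1):
    # Clip each example's gradient to L2 norm at most clip_norm (bounds sensitivity).
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound, averaged over the batch.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                             size=avg.shape)
    return params - lr * (avg + noise)
```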

Cited by 106 publications (85 citation statements)
References 36 publications
“…Cryptography-based privacy-preserving methods, such as secure aggregation protocols [5], Homomorphic Encryption (HE) [3,27], and Differential Privacy (DP) [7,9,10], have been developed for privacy-preserving learning, for instance, linear regression models [12,35], decision trees [4,6], and deep neural networks [19,20,27]. However, cryptographic operations are computationally expensive, and the consistency of visual patterns of images is generally not guaranteed in encrypted data formats, which usually leads to a learning model with poor generalization ability.…”
Section: Related Work (mentioning)
confidence: 99%
“…The red lines are obtained via Corollary 4, while the blue dashed lines are produced by the tensorflow/privacy library. See https://github.com/tensorflow/privacy for the details of the setting and more experiments in the follow-up work (Bu et al., 2019). …where $x^*$ is the unique fixed point of $f$. We will let the sampling fraction $p$ tend to 0 as $T$ approaches infinity. In the following theorem, $a_+^2$ is shorthand for $(\max\{a, 0\})^2$.…”
Section: Asymptotic Privacy Analysis (mentioning)
confidence: 99%
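The asymptotic analysis excerpted above yields a closed-form privacy accountant. Below is a minimal sketch (not the tensorflow/privacy implementation), assuming the paper's central-limit-theorem approximation, under which noisy SGD with sampling fraction p, T iterations, and noise multiplier σ is approximately μ-GDP with μ = p·sqrt(T·(e^{1/σ²} − 1)); the conversion to an (ε, δ)-DP curve uses the standard GDP duality formula. Function names and the example parameters are illustrative.

```python
# Sketch of an asymptotic Gaussian DP accountant for noisy SGD.
# Assumes the CLT approximation mu = p * sqrt(T * (exp(1/sigma^2) - 1));
# this is not the tensorflow/privacy API.
from math import exp, sqrt
from scipy.stats import norm

def asymptotic_mu(p: float, T: int, sigma: float) -> float:
    """GDP parameter for T subsampled Gaussian mechanism steps (CLT regime)."""
    return p * sqrt(T * (exp(1.0 / sigma**2) - 1.0))

def gdp_delta(mu: float, eps: float) -> float:
    """delta(eps) for a mu-GDP mechanism:
    delta = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)."""
    return norm.cdf(-eps / mu + mu / 2) - exp(eps) * norm.cdf(-eps / mu - mu / 2)

# Illustrative setting: 60,000 examples, batch size 256, 15 epochs, sigma = 1.1.
p = 256 / 60_000
T = 15 * (60_000 // 256)
mu = asymptotic_mu(p, T, sigma=1.1)
print(f"mu = {mu:.3f}, delta(eps=1) = {gdp_delta(mu, 1.0):.2e}")
```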
“…Another PPDL method that utilizes differential privacy is Bu19 [92]. Bu19 proposes Gaussian Differential Privacy (Gaussian DP), which formalizes the original DP technique as a hypothesis test from the adversaries' perspective. Table 4 shows the features of our surveyed differential privacy-based PPDL methods.…”
Section: Differential Privacy-Based PPDL (mentioning)
confidence: 99%
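For context on the hypothesis-testing view mentioned in this excerpt, the standard Gaussian DP definition can be sketched as follows (reproduced from the Gaussian DP literature, not quoted from the indexed paper):

```latex
% A mechanism M is mu-GDP if, for all neighboring data sets S and S',
% telling M(S) from M(S') apart is at least as hard (in the sense of
% trade-off functions) as telling N(0,1) from N(mu,1):
\[
  T\big(M(S), M(S')\big) \;\ge\; G_\mu,
  \qquad
  G_\mu(\alpha) = \Phi\big(\Phi^{-1}(1-\alpha) - \mu\big),
\]
% where Phi is the standard normal CDF and T(P,Q)(alpha) is the smallest
% type II error achievable by a level-alpha test of P versus Q.
```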