2021
DOI: 10.14778/3484224.3484231
Quantifying identifiability to choose and audit ϵ in differentially private deep learning

Abstract: Differential privacy allows bounding the influence that training data records have on a machine learning model. To use differential privacy in machine learning, data scientists must choose privacy parameters (ϵ, δ). Choosing meaningful privacy parameters is key, since models trained with weak privacy parameters might result in excessive privacy leakage, while strong privacy parameters might overly degrade model utility. However, privacy parameter values are difficult to choose for two …

Cited by 5 publications (1 citation statement)
References 27 publications
“…DP formulates a privacy bound on the ratio of the probability distributions that a mechanism induces on neighboring datasets D and D′. The privacy bound holds for an adversary with auxiliary knowledge of up to all but one record in the dataset [44, 45]. Yeom et al. [32] demonstrate that the privacy bound can be transformed into an upper bound on the membership advantage of an MI adversary.…”
Section: Privacy Metrics and Bounds
Confidence: 99%
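The transformation mentioned in the citing statement can be made concrete: for a pure ϵ-DP mechanism, Yeom et al. bound the membership advantage of an MI adversary by e^ϵ − 1. A minimal sketch of this bound (the function name is illustrative, not from the cited papers):

```python
import math

def membership_advantage_bound(epsilon: float) -> float:
    """Upper bound on the membership-inference advantage of an
    adversary against a pure epsilon-DP mechanism: Adv <= e^eps - 1
    (Yeom et al., 2018)."""
    return math.exp(epsilon) - 1.0

# For small epsilon the bound is roughly epsilon itself,
# since e^eps - 1 ≈ eps; for large epsilon it grows exponentially
# and stops being a meaningful guarantee.
for eps in (0.1, 1.0, 8.0):
    print(f"eps = {eps}: advantage <= {membership_advantage_bound(eps):.3f}")
```

This illustrates why parameter choice matters, as the abstract argues: an ϵ that looks moderate (e.g. ϵ = 8, common in practice) yields a vacuous advantage bound, while ϵ ≪ 1 gives an adversary almost no edge over random guessing.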
“…DP formulates a privacy bound on the ratio of probability distributions around D and D resulting from a mechanism. The privacy bound holds for an adversary with auxiliary knowledge of up to all but one record in the dataset [44,45]. Yeom et al [32] demonstrate that the privacy bound can be transformed into an upper bound on the membership advantage of an MI adversary.…”
Section: Privacy Metrics and Boundsmentioning
confidence: 99%