2021
DOI: 10.48550/arxiv.2110.07435
Preprint
Adaptive Differentially Private Empirical Risk Minimization

Abstract: We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization. At each iteration, the random noise added to the gradient is optimally adapted to the stepsize; we name this process adaptive differentially private (ADP) learning. Given the same privacy budget, we prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added. Our method is particularly use…
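The abstract describes coupling the gradient-perturbation noise to the stepsize. A minimal sketch of that idea, assuming a simple placeholder scaling rule (`sigma = sigma_base * sqrt(step_size)`; the paper derives the actual optimal adaptation, which is not reproduced here):

```python
import numpy as np

def adp_gradient_step(w, grad_fn, step_size, clip_norm, sigma_base, rng):
    """One gradient step with noise adapted to the step size (sketch)."""
    g = grad_fn(w)
    # Clip the gradient to bound per-step sensitivity.
    g = g * min(1.0, clip_norm / np.linalg.norm(g))
    # Hypothetical adaptation rule: noise scale tied to the step size.
    sigma = sigma_base * np.sqrt(step_size)
    noise = rng.normal(0.0, sigma * clip_norm, size=g.shape)
    return w - step_size * (g + noise)

rng = np.random.default_rng(0)
w = np.zeros(3)
grad_fn = lambda w: 2.0 * (w - 1.0)   # gradient of ||w - 1||^2
for _ in range(200):
    w = adp_gradient_step(w, grad_fn, step_size=0.05,
                          clip_norm=1.0, sigma_base=0.1, rng=rng)
# w drifts toward the minimizer at 1 despite the injected noise.
```

The privacy accounting that makes this (eps, delta)-DP over all iterations is omitted; only the perturbed update itself is illustrated.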

Cited by 4 publications (7 citation statements) | References 29 publications
“…Bu et al. [14] propose the automatic clipping (AUTO clipping) strategy, which replaces traditional gradient clipping with normalization. Wu et al. [15] propose an Adaptive Differentially Private Stochastic Gradient Descent (ADPSGD) algorithm, which adapts the random noise added to the gradient to the step size. Combining private learning with architecture search, Cheng et al. [16] propose the DPNASNet model, which achieves a state-of-the-art privacy/utility trade-off.…”
Section: Related Work
confidence: 99%
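The contrast between standard clipping and the normalization-based AUTO clipping mentioned above can be sketched as follows (illustrative reading of Bu et al. [14]; the paper's exact form, including the stabilizer `gamma`, may differ):

```python
import numpy as np

def standard_clip(g, C):
    """Standard DP-SGD clipping: rescale only if the norm exceeds C."""
    return g * min(1.0, C / np.linalg.norm(g))

def auto_clip(g, gamma=0.01):
    """AUTO clipping sketch: normalize every per-sample gradient;
    a small gamma keeps the division stable near zero."""
    return g / (np.linalg.norm(g) + gamma)

g = np.array([3.0, 4.0])        # norm 5
clipped = standard_clip(g, C=1.0)   # rescaled to norm 1
normed = auto_clip(g)               # normalized regardless of norm
```

Normalization removes the clipping-threshold hyperparameter: every per-sample gradient contributes with (approximately) unit norm, so sensitivity is bounded without tuning C.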
“…The privacy budget k is the overall privacy budget allocated to all input features. According to Eq (15), the more relevant the region is, the more privacy budget it is allocated and the less noise is added. This result is significant.…”
Section: Perturbation of the Input Features
confidence: 99%
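The allocation described above, budget proportional to relevance, can be sketched with a Laplace mechanism per region. This is an illustrative reading of the cited Eq (15), not that paper's exact rule; the relevance scores below are hypothetical:

```python
import numpy as np

def allocate_budget(relevance, eps_total):
    """Split a total privacy budget across regions in proportion
    to their relevance scores."""
    relevance = np.asarray(relevance, dtype=float)
    return eps_total * relevance / relevance.sum()

def laplace_perturb(x, eps, sensitivity, rng):
    """Laplace mechanism: scale b = sensitivity / eps, so a larger
    budget share means a smaller noise scale."""
    return x + rng.laplace(0.0, sensitivity / eps, size=np.shape(x))

rng = np.random.default_rng(0)
relevance = [0.7, 0.2, 0.1]                 # hypothetical region relevance
eps = allocate_budget(relevance, eps_total=1.0)
features = np.array([1.0, 1.0, 1.0])
noisy = np.array([laplace_perturb(f, e, 1.0, rng)
                  for f, e in zip(features, eps)])
```

By sequential composition, the per-region budgets sum to the overall budget, while the most relevant region receives the largest share and hence the least noise.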
“…Abadi et al. (2016) propose a tight privacy accounting that yields a reasonable privacy cost for DP-SGD. Following that, variants of DP-SGD have been proposed to improve model accuracy, such as clipping based on quantiles (Andrew et al., 2019), dynamic noise power and clipping value (Du et al., 2021), and dynamic learning rate (Wu et al., 2021). DP-SGD and its variants, however, have had limited success in large deep learning models due to their high computation cost, large memory overhead, and significant performance drops.…”
Section: Related Work
confidence: 99%