2019
DOI: 10.29012/jpc.660

Differentially Private Confidence Intervals for Empirical Risk Minimization

Abstract: The process of data mining with differential privacy produces results that are affected by two types of noise: sampling noise due to data collection and privacy noise that is designed to prevent the reconstruction of sensitive information. In this paper, we consider the problem of designing confidence intervals for the parameters of a variety of differentially private machine learning models. The algorithms can provide confidence intervals that satisfy differential privacy (as well as the more recently propose…

Cited by 26 publications (25 citation statements)
References 28 publications
“…Few studies address differentially private confidence intervals and hypothesis tests specifically for regression coefficients. Wang et al (2018) developed confidence intervals based on Chaudhuri et al's (2011) techniques with satisfactory empirical coverage probability for data sets with independent and identically distributed records. However, these rely on asymptotic normality of the gradient of the objective function, which may not be satisfied for sample sizes smaller than those used in the study's empirical evaluation, where n was 30 000 or more.…”
Section: Discussion
confidence: 99%
“…Unlike the techniques of Sheffet (2017), these do not require boundedness assumptions on the underlying data. For more general regression models, Wang et al (2018) developed confidence intervals satisfying zero‐concentrated differential privacy for coefficients produced by the regularised methods of Chaudhuri et al (2011). They exploit a large‐sample normal approximation to the gradient of the objective function at its optimum, under the assumption that data set rows are randomly sampled.…”
Section: Confidence Intervals and Hypothesis Tests
confidence: 99%
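The citation statements above describe the core idea behind differentially private confidence intervals: the released estimate carries both sampling noise and injected privacy noise, and a valid interval must account for the variance of both. The sketch below is a minimal toy illustration of that principle for a bounded mean under the Gaussian mechanism; the function name `dp_mean_ci` and all parameter choices are hypothetical, and this is NOT the algorithm of Wang et al. (2019), which handles general empirical risk minimization via a normal approximation to the gradient.

```python
import math
import random

def dp_mean_ci(data, lo, hi, epsilon, delta, alpha=0.05):
    """Toy (1 - alpha) confidence interval for the mean of [lo, hi]-bounded
    data under (epsilon, delta)-DP.  Illustrative sketch only: it shows how
    sampling variance and privacy-noise variance combine in the interval
    width; it is not the cited paper's method."""
    n = len(data)
    clipped = [min(max(x, lo), hi) for x in data]
    mean = sum(clipped) / n
    # Gaussian-mechanism noise scale for the mean's sensitivity (hi - lo) / n.
    sensitivity = (hi - lo) / n
    sigma_priv = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    noisy_mean = mean + random.gauss(0.0, sigma_priv)
    # Conservative sampling standard error for a [lo, hi]-valued variable
    # (Popoviciu's bound: sd <= (hi - lo) / 2).
    sigma_samp = (hi - lo) / (2.0 * math.sqrt(n))
    # Sampling noise and privacy noise are independent, so variances add.
    se = math.sqrt(sigma_samp ** 2 + sigma_priv ** 2)
    z = 1.96  # approximate normal quantile for alpha = 0.05
    return noisy_mean - z * se, noisy_mean + z * se
```

The key design point mirrors the quoted discussion: ignoring `sigma_priv` (treating the noisy estimate as if it were the raw one) produces intervals whose empirical coverage falls below the nominal level, especially at small `n` where privacy noise dominates.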
“…More recently, Duchi et al. (2018) release differentially private estimators for means, medians, generalized linear models, and non-parametric densities. Wang et al. (2019) generate confidence intervals for parameters obtained from differentially private empirical risk minimization machine learning models. Other researchers found that differential privacy, part of our goal, was too strong a requirement for meaningful data utility on highly detailed data sets (Abowd et al. 2013; Bambauer et al. 2013); however, recent research is more promising.…”
Section: Literature Review
confidence: 99%
“…Previous work in [21, 48, 58] has investigated how to accurately compute test statistics in hypothesis testing while using DP to protect data. Ding et al. designed an algorithm to detect privacy violations of DP mechanisms from a hypothesis-testing perspective [12].…”
Section: Hypothesis Testing in DP
confidence: 99%
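The hypothesis-testing line of work cited above faces the same issue as the confidence-interval work: a test statistic computed from a privatized quantity has inflated variance, and a valid test must recalibrate for it. As a toy sketch under stated assumptions (the helper name `dp_z_test` and all parameters are hypothetical, not drawn from the cited papers), here is a one-sample proportion z-test whose success count is released with the Laplace mechanism and whose standard error absorbs the Laplace variance:

```python
import math
import random

def dp_z_test(successes, n, p0, epsilon):
    """Toy one-sample proportion z-test with an epsilon-DP statistic.
    The success count (sensitivity 1) is privatized with Laplace(1/epsilon)
    noise, generated here as a difference of two exponentials; the extra
    noise variance 2/epsilon^2 is folded into the standard error.
    Illustrative only -- real DP tests require careful calibration."""
    laplace_noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    p_hat = (successes + laplace_noise) / n
    var_samp = p0 * (1.0 - p0) / n            # sampling variance under H0
    var_priv = 2.0 / (epsilon ** 2 * n ** 2)  # Laplace variance scaled by 1/n^2
    return (p_hat - p0) / math.sqrt(var_samp + var_priv)
```

With a corrected standard error the statistic stays approximately standard normal under the null, which is the recalibration step the surveyed DP hypothesis-testing work formalizes.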