2018
DOI: 10.48550/arxiv.1805.00216
Preprint

Privately Learning High-Dimensional Distributions

Abstract: We present novel, computationally efficient, and differentially private algorithms for two fundamental high-dimensional learning problems: learning a multivariate Gaussian and learning a product distribution over the Boolean hypercube in total variation distance. The sample complexity of our algorithms nearly matches the sample complexity of the optimal non-private learners for these tasks in a wide range of parameters, showing that privacy comes essentially for free for these problems. In particular, in contr…
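The abstract's claim that "privacy comes essentially for free" refers to matching the non-private sample complexity. For orientation, here is a minimal Python sketch of the textbook Gaussian mechanism for private mean estimation. It is not the paper's algorithm: the function name and the a priori norm bound `radius` are illustrative assumptions, and removing the need for such bounds is precisely part of the paper's contribution.

import numpy as np

def private_gaussian_mean(samples, eps, delta, radius, rng=None):
    # (eps, delta)-DP mean estimate via the textbook Gaussian mechanism.
    # A minimal sketch, NOT the paper's algorithm: it assumes an a priori
    # l2 bound `radius` on the samples, an assumption the paper avoids.
    rng = rng or np.random.default_rng()
    n, d = samples.shape
    # Clip each row to l2 norm <= radius; then replacing one sample moves
    # the empirical mean by at most 2 * radius / n in l2 norm.
    norms = np.linalg.norm(samples, axis=1, keepdims=True)
    clipped = samples * np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    sigma = (2.0 * radius / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, size=d)

# Example: estimate the mean of N(mu, I) in d = 20 dimensions.
rng = np.random.default_rng(0)
mu = np.ones(20)
x = rng.normal(mu, 1.0, size=(10_000, 20))
print(private_gaussian_mean(x, eps=1.0, delta=1e-6, radius=10.0, rng=rng))

With clipping radius R, the added noise contributes roughly R * sqrt(d log(1/δ)) / (nε) of extra l2 error on top of the sqrt(d/n) sampling error, which is lower-order in a wide range of parameters.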

Cited by 4 publications (17 citation statements)
References 17 publications
“…statistical setting. [21] studied Gaussian mean estimation but did not obtain a tight bound with respect to δ.…”
Section: 3 (mentioning, confidence: 99%)
“…To demonstrate the abstract formulation in Theorem 2.1, we apply it to prove a concrete result, Theorem 2.2, that improved a similar lower bound in [21] by log(1/δ): when the sample size is n, any (ε, δ)-differentially private estimator of a d-dimensional sub-Gaussian mean vector must incur an extra error of at least the order of √(d log(1/δ))/(nε), in addition to the standard √(d/n) statistical error. The utility of Theorem 2.1 is further shown in Theorems 3.1, 4.1 and 4.3, for high-dimensional sparse mean estimation as well as linear regression in both low-dimensional and high-dimensional settings.…”
Section: Introduction (mentioning, confidence: 99%)
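Read together, the two quoted terms combine into a single minimax rate. The following is a worked restatement under my reading of the flattened quote (the radicals are reconstructed, not verbatim), written as LaTeX:

\[
  \inf_{\hat{\mu}\ (\varepsilon,\delta)\text{-DP}}\ \sup_{\mu}\
  \mathbb{E}\,\lVert \hat{\mu}-\mu\rVert_2
  \;\asymp\; \sqrt{\frac{d}{n}} \;+\; \frac{\sqrt{d\log(1/\delta)}}{n\varepsilon},
\]
so driving the error below a target \(\alpha\) requires, up to constants,
\[
  n \;\gtrsim\; \frac{d}{\alpha^{2}} \;+\; \frac{\sqrt{d\log(1/\delta)}}{\alpha\varepsilon}.
\]

The first term is the non-private cost; the second is the privacy surcharge, which is lower-order unless ε is very small, matching the "privacy essentially for free" reading of the abstract.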
“…However, there is also evidence showing that the loss for some problems under privacy constraints can be quite small compared with their non-private counterparts. Examples of this nature include high-dimensional sparse PCA [8], sparse inverse covariance estimation [9], and high-dimensional distribution estimation [10]. Thus, it is desirable to determine which high-dimensional problems can be learned or estimated efficiently in a private manner.…”
Section: Introduction (mentioning, confidence: 99%)
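As a companion to the abstract's second problem, here is a baseline Python sketch of a pure ε-DP estimator for a product distribution over the Boolean hypercube. Again this is not the paper's TV-optimal algorithm, just the standard Laplace mechanism; the function name and parameters are illustrative assumptions.

import numpy as np

def private_product_dist(bits, eps, rng=None):
    # Pure eps-DP baseline via the Laplace mechanism, NOT the paper's
    # algorithm. Changing one sample moves each coordinate mean by at
    # most 1/n, so the mean vector has l1 sensitivity d/n and adding
    # Lap(d/(n*eps)) noise per coordinate gives eps-DP.
    rng = rng or np.random.default_rng()
    n, d = bits.shape
    noisy = bits.mean(axis=0) + rng.laplace(0.0, d / (n * eps), size=d)
    return np.clip(noisy, 0.0, 1.0)  # project back to valid probabilities

# Example: n = 50,000 samples from a product distribution over {0,1}^30.
rng = np.random.default_rng(1)
p = rng.uniform(0.1, 0.9, size=30)
data = (rng.random((50_000, 30)) < p).astype(float)
print(np.abs(private_product_dist(data, eps=1.0, rng=rng) - p).max())

Splitting the budget uniformly this way is the simplest design choice; it is exactly the kind of baseline whose sample complexity the paper's algorithms improve on in total variation distance.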