2016
DOI: 10.48550/arxiv.1603.07294
Preprint

On the Theory and Practice of Privacy-Preserving Bayesian Data Analysis

Abstract: Bayesian inference has great promise for the privacy-preserving analysis of sensitive data, as posterior sampling automatically preserves differential privacy, an algorithmic notion of data privacy, under certain conditions (Dimitrakakis et al., 2014; Wang et al., 2015b). While this one posterior sample (OPS) approach elegantly provides privacy "for free," it is data inefficient in the sense of asymptotic relative efficiency (ARE). We show that a simple alternative based on the Laplace mechanism, the workhorse …
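The abstract contrasts OPS with the Laplace mechanism. As a minimal sketch of the latter (an illustration of the standard mechanism, not the paper's implementation), a statistic is released with additive Laplace noise whose scale is the statistic's sensitivity divided by the privacy parameter ε:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, seed=None):
    """Release `value` perturbed with Laplace(sensitivity / epsilon) noise.

    If `sensitivity` bounds the worst-case change of the statistic over
    neighboring datasets, the released value is epsilon-differentially
    private.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))

# Example: privately release a count of 42 (sensitivity 1) at epsilon = 0.5.
private_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, seed=0)
```

Smaller ε means stronger privacy and larger noise; the ARE comparison in the paper is about how this noise cost scales relative to OPS.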

Cited by 13 publications (28 citation statements) · References 11 publications
“…For example, Wang et al. [36] and Dimitrakakis et al. [38] first demonstrated that, when certain conditions on the loss function are met, posterior Bayesian sampling and stochastic gradient Markov chain Monte Carlo (MCMC) techniques possess an inherent privacy guarantee without the introduction of extra noise. Foulds et al. [40] further extended this conclusion to general MCMC methods. By utilizing the inherent randomness of MCMC, they achieved a level of privacy protection equivalent to that assured by a Laplace mechanism.…”
Section: Intrinsic Privacy of Randomized Algorithms
confidence: 84%
“…A direct method to achieve DP in the CGS-based LDA algorithm is to add noise to the inner statistics [40], [17], e.g., the word counts n_k^t and n_m^k, from which the sampling probability is computed to perform topic sampling. In [40], the authors provide a general method to achieve DP in Gibbs sampling, i.e., adding Laplace noise to the sufficient statistics, n_k^t and n_m^k in LDA, at the beginning of the Gibbs sampling process. [17] proposes to add Laplace noise to n_k^t and n_m^k in the final iteration.…”
Section: A. Limitations of the Existing Methods
confidence: 99%
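The sufficient-statistics perturbation quoted above can be sketched as follows, assuming a plain topic-word count matrix and treating the noise sensitivity as a placeholder (the actual sensitivity analysis is in the cited work [40]):

```python
import numpy as np

def perturb_sufficient_statistics(counts, epsilon, sensitivity=1.0, seed=None):
    """Add Laplace noise to LDA count statistics (e.g. topic-word counts)
    before Gibbs sampling, in the spirit of the approach quoted above.

    `sensitivity` is a placeholder here; the correct value depends on how
    many counts a single record can change. Noisy counts are clipped to a
    small positive floor so the derived sampling probabilities stay valid.
    """
    rng = np.random.default_rng(seed)
    noisy = counts + rng.laplace(0.0, sensitivity / epsilon, size=counts.shape)
    return np.clip(noisy, 1e-10, None)

# Example: a 4-topic x 6-word count matrix, perturbed at epsilon = 1.0.
n_tk = np.array([[3., 0., 1., 2., 0., 5.],
                 [0., 4., 2., 0., 1., 0.],
                 [1., 1., 0., 3., 0., 2.],
                 [2., 0., 0., 0., 6., 1.]])
noisy_n_tk = perturb_sufficient_statistics(n_tk, epsilon=1.0, seed=0)
```

Perturbing once before sampling (rather than in every iteration) is what lets the privacy cost be paid a single time, which is the point of contrast with the final-iteration variant of [17].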
“…Assume we model the locations according to an LGCP with intensity λ(·) given by (2), with a spatial random field defined as in (4). As shown in Foulds et al. [10], releasing one sample from the posterior distribution π(λ|S) is 2C-differentially private for any prior, provided…”
Section: Differential Privacy
confidence: 98%
“…To see this, note that (10) can equivalently be expressed as λ†(s) = exp(ν(s)) λ̂(s), where λ̂(s) denotes the intensity surface with plug-in posterior mean estimates β̂ and ŵ. We also note that ANS can be viewed as an extension of the synthesis method proposed by Quick et al. [17], in which synthetic locations were generated based on posterior mean plug-in estimates.…”
Section: Additive Noise Synthesis
confidence: 99%