Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/2976749.2978308

Differential Privacy as a Mutual Information Constraint

Abstract: Differential privacy is a precise mathematical constraint meant to ensure privacy of individual pieces of information in a database even while queries are being answered about the aggregate. Intuitively, one must come to terms with what differential privacy does and does not guarantee. For example, the definition prevents a strong adversary who knows all but one entry in the database from further inferring about the last one. This strong adversary assumption can be overlooked, resulting in misinterpretation of…
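
For concreteness, here is a minimal sketch of the standard Laplace mechanism, the classic kind of query-answering mechanism that the differential-privacy constraint applies to. This is not code from the paper; the database, query, and parameter values below are made up for illustration:

import numpy as np

def laplace_mechanism(database, query, sensitivity, epsilon, rng=None):
    # Answer `query` on `database` with Laplace noise of scale sensitivity/epsilon.
    # This is the classic epsilon-differentially-private mechanism: changing any
    # single entry shifts the output distribution by at most a factor e^epsilon.
    rng = rng or np.random.default_rng()
    return query(database) + rng.laplace(scale=sensitivity / epsilon)

# Example: a counting query has sensitivity 1, since adding or removing one
# person changes the count by at most 1 (values below are illustrative).
db = [23, 67, 45, 89, 12, 71]
noisy_count = laplace_mechanism(db, lambda d: sum(x > 50 for x in d),
                                sensitivity=1, epsilon=0.5)
print(noisy_count)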


Cited by 155 publications (167 citation statements). References 29 publications.

“…Since the seminal work of Dwork [1] on differential privacy (DP), a lot of its variants have been studied to provide different types of privacy guarantees [21]; e.g., d-privacy [13], f-divergence privacy [20], [8], mutual-information DP [9], concentrated DP [22], Rényi DP [10], Pufferfish privacy [23], Bayesian DP [24], local DP [2], personalized DP [25], and utility-optimized local DP [26]. All of these are intended to protect single input values instead of input distributions.…”
Section: Related Work (mentioning)
“…In this paper, we relax the notion of DistP by generalizing it to an arbitrary divergence. The basic idea is similar to point privacy notions that relax DP and improve utility by relying on some divergence (e.g., total variation privacy [8], Kullback-Leibler divergence privacy [8], [9], and Rényi differential privacy [10]). We define the notion of divergence distribution privacy by replacing the DP-style divergence with an arbitrary divergence D. This relaxation allows us to formalize "on-average" DistP, and to explore privacy notions…”
Section: Introduction (mentioning)
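
The idea of swapping the divergence in a DP-style condition can be sketched generically. The snippet below is my own illustration, not code from the cited papers; the function names and example distributions are made up, and it checks the condition only for a single pair of output distributions (the actual definitions quantify over all adjacent inputs, or over input distributions in the case of DistP):

import numpy as np

def max_divergence(p, q):
    # DP-style worst-case log-likelihood ratio (in nats); pure epsilon-DP
    # bounds this quantity for every pair of adjacent inputs.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.max(np.log(p[mask] / q[mask])))

def kl_divergence(p, q):
    # Kullback-Leibler divergence, one of the relaxations mentioned above.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def total_variation(p, q):
    # Total variation distance, another relaxation mentioned above.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(0.5 * np.sum(np.abs(p - q)))

def satisfies_divergence_privacy(p, q, divergence, eps):
    # Generic "D-privacy"-style check for one pair of adjacent inputs:
    # the chosen divergence between their output distributions is at most eps.
    return divergence(p, q) <= eps

# Illustrative output distributions of a mechanism on two adjacent inputs.
p = [0.75, 0.25]
q = [0.25, 0.75]
print(satisfies_divergence_privacy(p, q, max_divergence, 1.2))   # DP-style bound
print(satisfies_divergence_privacy(p, q, kl_divergence, 0.6))    # KL relaxation
print(satisfies_divergence_privacy(p, q, total_variation, 0.5))  # TV relaxation
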
“…Proposition 1 (Theorem 1 in [4]). ε-MI-DP is stronger than (ε, δ)-DP in the sense that for all ε > 0, if a mechanism is ε-MI-DP, there exist ε′, δ′ such that the mechanism satisfies (ε′, δ′)-DP.…”
Section: Formulation and Background (mentioning)
“…Proposition 2 (See Lemma 2 in [4]). If a mechanism is ε-MI-DP, then it also satisfies (0, 2 log(e) ε)-DP.…”
Section: Formulation and Background (mentioning)
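
To make the quantity in these propositions concrete, here is a small sketch (mine, not from [4] or the citing paper) that computes I(X_i ; Y | X_{-i}), the conditional mutual information that ε-MI-DP bounds, for a toy two-entry binary database with a bitwise randomized-response mechanism; the flip probability and the uniform input distribution are illustrative assumptions:

from itertools import product
from math import log2

flip_p = 0.25  # illustrative flip probability for randomized response

def mechanism_prob(y, x):
    # Pr[Y = y | X = x]: each bit of x is reported faithfully with
    # probability 1 - flip_p and flipped with probability flip_p.
    prob = 1.0
    for yi, xi in zip(y, x):
        prob *= (1 - flip_p) if yi == xi else flip_p
    return prob

# Joint distribution Pr[X1, X2, Y] with X1, X2 independent uniform bits.
joint = {}
for x1, x2 in product([0, 1], repeat=2):
    for y in product([0, 1], repeat=2):
        joint[(x1, x2, y)] = 0.25 * mechanism_prob(y, (x1, x2))

def marginal(keep):
    # Marginalize the joint onto the variable indices in `keep`
    # (0 = X1, 1 = X2, 2 = Y).
    out = {}
    for k, prob in joint.items():
        kk = tuple(k[i] for i in keep)
        out[kk] = out.get(kk, 0.0) + prob
    return out

p_x2 = marginal([1])
p_x1x2 = marginal([0, 1])
p_x2y = marginal([1, 2])

# I(X1; Y | X2) = sum_{x1,x2,y} p(x1,x2,y) log2[ p(x1,x2,y) p(x2) / (p(x1,x2) p(x2,y)) ]
cmi = 0.0
for (x1, x2, y), prob in joint.items():
    if prob > 0:
        cmi += prob * log2(prob * p_x2[(x2,)] / (p_x1x2[(x1, x2)] * p_x2y[(x2, y)]))

print(f"I(X1; Y | X2) = {cmi:.4f} bits")  # about 0.19 bits for flip_p = 0.25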