2021
DOI: 10.1109/tifs.2021.3073804

Quantifying Membership Privacy via Information Leakage

Abstract: Machine learning models are known to memorize the unique properties of individual data points in a training set. This memorization capability can be exploited by several types of attacks to infer information about the training data, most notably, membership inference attacks. In this paper, we propose an approach based on information leakage for guaranteeing membership privacy. Specifically, we propose to use a conditional form of the notion of maximal leakage to quantify the information leaking about individu…

Cited by 23 publications (10 citation statements)
References 29 publications
“…Example 15: Suppose we want to solve problem (21) with γ = log 2.5 and for a set Π^(1) of distributions over an alphabet with four elements defined as Π^(1) = {π ∈ Δ^(3) : π = (0.4 − 2δ, 0.3 + δ, 0.15 + 0.5δ, 0.15 + 0.5δ), 0 ≤ δ ≤ 0.1}. To solve the problem, first we need to construct the set Π…”
Section: General Sets
confidence: 99%
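As a quick sanity check on the family quoted above, the following sketch (function names are hypothetical, and the grid step over δ is an illustrative choice, not from the paper) enumerates π(δ) = (0.4 − 2δ, 0.3 + δ, 0.15 + 0.5δ, 0.15 + 0.5δ) for δ ∈ [0, 0.1] and confirms that each member lies in the probability simplex Δ^(3):

```python
# Sanity check: every pi(delta) in the set Pi^(1) described in Example 15
# is a valid probability distribution on a four-letter alphabet.

def pi(delta):
    """Member of Pi^(1) parameterized by delta in [0, 0.1]."""
    return (0.4 - 2 * delta, 0.3 + delta, 0.15 + 0.5 * delta, 0.15 + 0.5 * delta)

def in_simplex(p, tol=1e-9):
    """Check that p has nonnegative entries summing to 1 (up to tolerance)."""
    return all(x >= -tol for x in p) and abs(sum(p) - 1.0) < tol

# Illustrative grid over the parameter range [0, 0.1].
deltas = [k * 0.1 / 100 for k in range(101)]
assert all(in_simplex(pi(d)) for d in deltas)
```

The coefficients of δ cancel (−2 + 1 + 0.5 + 0.5 = 0), so every member sums to 1 by construction; the check above only confirms nonnegativity over the stated range.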
“…For example, no post-processing of the published data can undermine the initial privacy guarantee (i.e., maximal leakage satisfies a data processing inequality) [2]. In addition, maximal leakage can be employed as a tool for studying the privacy guarantees of practical algorithms [3].…”
Section: Introduction
confidence: 99%
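The data processing inequality mentioned in the quote above is easy to check numerically: for a discrete channel, maximal leakage has the closed form L(X→Y) = log Σ_y max_x P(y|x), and cascading a second channel can only reduce it. A minimal sketch, with channel matrices chosen arbitrarily for illustration:

```python
import math

def maximal_leakage(channel):
    """Maximal leakage log sum_y max_x P(y|x) of a row-stochastic channel,
    given as a list of rows channel[x][y]. Result is in nats."""
    n_outputs = len(channel[0])
    return math.log(sum(max(row[y] for row in channel) for y in range(n_outputs)))

def compose(first, second):
    """Cascade X -> Y -> Z: matrix product of the two channel matrices."""
    return [[sum(first[x][y] * second[y][z] for y in range(len(second)))
             for z in range(len(second[0]))]
            for x in range(len(first))]

# A noiseless channel leaks log |X|; post-processing can only reduce leakage.
identity = [[1.0, 0.0], [0.0, 1.0]]
noisy = [[0.8, 0.2], [0.3, 0.7]]   # illustrative post-processing channel
cascaded = compose(identity, noisy)

assert math.isclose(maximal_leakage(identity), math.log(2))
assert maximal_leakage(cascaded) <= maximal_leakage(identity) + 1e-12
```

Here the cascaded channel's leakage is log 1.5 < log 2, matching the post-processing guarantee the citing paper points to.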
“…Also, this metric requires inverting a Hessian which is computationally expensive for large models. Similarly, an information theoretic metric [37] can be used to compute an upper bound on the privacy risks of the PATE framework [35]. Even so, it cannot be used as a standalone metric for record-level membership privacy risk.…”
Section: Related Work
confidence: 99%
“…Finally, the threat model of maximal leakage has fewer assumptions about the eavesdropper while [7], [8] assume that the eavesdropper has access to the distortion measure and even the target distortion level shared by the encoder and the decoder. Due to the above advantages, maximal leakage has been adopted in various settings as the secrecy/privacy measure, e.g., membership privacy [10], biometric template protection [11], and information retrieval [12].…”
Section: Introduction
confidence: 99%