2021
DOI: 10.48550/arxiv.2111.09679
Preprint

Enhanced Membership Inference Attacks against Machine Learning Models

Abstract: How much does a given trained model leak about each individual data record in its training set? Membership inference attacks are used as an auditing tool to quantify the private information that a model leaks about the individual data points in its training set. The attacks are influenced by different uncertainties that an attacker has to resolve about training data, the training algorithm, and the underlying data distribution. Thus attack success rates, of many attacks in the literature, do not precisely capt…

Cited by 11 publications
(32 citation statements)
References 41 publications
“…As observed by [28,13], the optimal adversary performs a hypothesis test with respect to the posterior probabilities with the two hypotheses being:…”
Section: Optimal Membership Inference via Hypothesis Testing
Citation type: mentioning, confidence: 99%
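The hypothesis test described in this excerpt can be sketched as a likelihood-ratio test between H_in (the example was a training member) and H_out (it was not). A minimal illustration, assuming, purely for illustration, that the attacker summarizes the model's behavior on the example as a scalar loss and models that loss as Gaussian under each hypothesis (the Gaussian model and parameter values are assumptions, not part of the cited construction):

```python
import numpy as np

def gaussian_logpdf(x, mean, std):
    """Log-density of a normal distribution, written out explicitly."""
    return -0.5 * np.log(2 * np.pi * std**2) - (x - mean) ** 2 / (2 * std**2)

def membership_score(loss, mu_in, sigma_in, mu_out, sigma_out):
    """Log likelihood ratio of H_in (member) vs. H_out (non-member).

    Larger values favor the hypothesis that the example was a
    training-set member. Thresholding this score gives the
    Neyman-Pearson-optimal test under the assumed loss model.
    """
    return (gaussian_logpdf(loss, mu_in, sigma_in)
            - gaussian_logpdf(loss, mu_out, sigma_out))

# Members typically have lower loss, so a small observed loss
# should produce a positive (member-favoring) score.
score = membership_score(loss=0.05, mu_in=0.1, sigma_in=0.1,
                         mu_out=1.0, sigma_out=0.5)
```

Sweeping the decision threshold on this score traces out the attack's ROC curve.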
“…The attack fails to achieve a TPR better than random chance at any FPR below 20%; it is therefore ineffective at confidently breaching the privacy of its members. Prior papers that do report ROC curves summarize them by the AUC (Area Under the Curve) [18,38,42,55,67,68]. However, as we can see from the curves above, the AUC is not an appropriate measure of an attack's efficacy, since the AUC averages over all false-positive rates, including high error rates that are irrelevant for a practical attack.…”
Section: B. Evaluating Membership Inference Attacks
Citation type: mentioning, confidence: 99%
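The criticism of AUC above suggests reporting the true-positive rate at a fixed low false-positive rate instead. A numpy-only sketch of that metric (the function name and the synthetic score distributions are made up for illustration):

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, max_fpr):
    """True-positive rate achievable while keeping FPR <= max_fpr.

    Scores follow the convention 'higher means more likely a member'.
    The decision threshold is placed at the (1 - max_fpr) quantile of
    the non-member scores, so roughly a max_fpr fraction of
    non-members exceed it.
    """
    member_scores = np.asarray(member_scores, dtype=float)
    nonmember_scores = np.asarray(nonmember_scores, dtype=float)
    threshold = np.quantile(nonmember_scores, 1.0 - max_fpr)
    return float(np.mean(member_scores > threshold))

# Hypothetical attack scores: members shifted slightly above non-members.
rng = np.random.default_rng(0)
members = rng.normal(1.0, 1.0, 10_000)
nonmembers = rng.normal(0.0, 1.0, 10_000)
low_fpr_tpr = tpr_at_fpr(members, nonmembers, max_fpr=0.001)
```

An attack with a high AUC can still have a near-zero TPR at FPR = 0.001, which is the regime this excerpt argues matters in practice.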
“…We directly turn this observation into a membership inference attack by computing per-example hardness scores [36,54,67,68]. By training models on random samples of data from the distribution D, we obtain empirical estimates of the distributions Q_in and Q_out for any example (x, y).…”
Section: Estimating the Likelihood Ratio
Citation type: mentioning, confidence: 99%
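A rough sketch of the estimation step above, assuming the attacker records a score (e.g. a loss) for the target example from shadow models trained with and without it, then fits a Gaussian to each empirical distribution. The Gaussian fit is a common simplification, not something the excerpt mandates, and the function names are invented for this sketch:

```python
import numpy as np

def fit_in_out(scores_in, scores_out):
    """Fit Gaussian estimates of Q_in and Q_out from shadow-model scores.

    scores_in:  scores for the example from shadow models whose random
                training sample included (x, y).
    scores_out: scores from shadow models whose sample excluded it.
    Returns (mu_in, sigma_in, mu_out, sigma_out).
    """
    scores_in = np.asarray(scores_in, dtype=float)
    scores_out = np.asarray(scores_out, dtype=float)
    return (scores_in.mean(), scores_in.std(ddof=1),
            scores_out.mean(), scores_out.std(ddof=1))

def hardness_ratio(observed, mu_in, sigma_in, mu_out, sigma_out):
    """Per-example log likelihood ratio under the fitted Gaussians."""
    def logpdf(x, mu, s):
        return -0.5 * np.log(2 * np.pi * s**2) - (x - mu) ** 2 / (2 * s**2)
    return (logpdf(observed, mu_in, sigma_in)
            - logpdf(observed, mu_out, sigma_out))

# Synthetic shadow-model losses: lower when the example was included.
mu_i, s_i, mu_o, s_o = fit_in_out([0.10, 0.20, 0.15, 0.12],
                                  [1.00, 1.20, 0.90, 1.10])
```

Scoring the target model's observed loss with `hardness_ratio` then yields the membership signal for that specific example.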
“…Recent works argue that a practically relevant threat model of membership inference is to confidently predict training-set membership of a few samples rather than guessing well on average (Watson et al., 2021; Carlini et al., 2021a; Ye et al., 2021). Such membership inference attacks are evaluated using true and false positive rates.…”
Section: Privacy Attacks
Citation type: mentioning, confidence: 99%