2021
DOI: 10.2478/popets-2022-0023

Disparate Vulnerability to Membership Inference Attacks

Abstract: A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: unequal success rate of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion of distributional general…
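The abstract describes membership inference in terms of an attacker deciding, per record, whether it was in the training set. As a hedged illustration only (not the paper's method, and all names below are invented for this sketch), a common baseline is a loss-threshold MIA: predict "member" when the model's loss on a record falls below a calibrated threshold, exploiting the fact that models typically fit training records more tightly than unseen ones.

```python
import numpy as np

def loss_threshold_mia(per_example_losses, threshold):
    """Baseline loss-threshold membership inference: guess that a
    record was a training member when the model's loss on it is
    below the threshold. Returns a boolean array (True = member)."""
    return np.asarray(per_example_losses) < threshold

def attack_success_rate(pred_member, true_member):
    """Fraction of records whose membership was guessed correctly."""
    return float(np.mean(np.asarray(pred_member) == np.asarray(true_member)))

# Toy data: training members tend to have lower loss than held-out
# non-members (the gap is what the attack exploits).
rng = np.random.default_rng(0)
member_losses = rng.normal(0.2, 0.1, size=500)      # training records
nonmember_losses = rng.normal(0.8, 0.1, size=500)   # held-out records
losses = np.concatenate([member_losses, nonmember_losses])
truth = np.concatenate([np.ones(500, dtype=bool), np.zeros(500, dtype=bool)])

preds = loss_threshold_mia(losses, threshold=0.5)
rate = attack_success_rate(preds, truth)
```

Disparate vulnerability, as studied in the paper, would correspond to this success rate differing across population subgroups when computed per subgroup rather than over the whole population.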

Cited by 15 publications (13 citation statements)
References 19 publications
“…When the State value "CA" is updated to another common value "NY" (New York, comprising 8% of the dataset), the success rate of the two-model attack is 6%, but when it is changed to a rare value "SD" (South Dakota, 0.2%), the attack success rate rises to 65%. This is in line with prior studies showing that membership inference attacks, for example, have higher success rates on records with rare values [12].…”
Section: Updated Attribute Inference (supporting)
confidence: 91%
“…We are not able to answer these questions yet, although our experiments provide several intriguing observations and suggest possibilities to explore. It is not surprising that so little is understood about distribution inference-the research community has put extensive effort into studying membership inference attacks, and we are just beginning to be able to understand how and why membership inference risk varies [32]. There could also be trade-offs between robustness, fairness, interpretability, and vulnerability to distribution inference attacks.…”
Section: Discussion (mentioning)
confidence: 99%
“…Empirically, this is evaluated through membership inference attacks, where an attacker uses the model to determine whether a given data point was in the training set (Shokri et al, 2017). While Kulynych et al (2022) observed that DP reduces disparate vulnerability to such attacks, it has also been shown that DP can exacerbate unfairness (Bagdasaryan et al, 2019;Pujol et al, 2020). Conversely, Chang and Shokri (2020) showed that enforcing a fair model leads to more privacy leakage for the unprivileged group.…”
Section: Related Work (mentioning)
confidence: 99%