2022
DOI: 10.2478/popets-2022-0050

Knowledge Cross-Distillation for Membership Privacy

Abstract: A membership inference attack (MIA) poses privacy risks for the training data of a machine learning model. With an MIA, an attacker guesses whether the target data are a member of the training dataset. The state-of-the-art defense against MIAs, distillation for membership privacy (DMP), requires not only private data for protection but also a large amount of unlabeled public data. However, in certain privacy-sensitive domains, such as medicine and finance, the availability of public data is not guaranteed. Moreover, a t…
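To make the threat model concrete: a simple family of MIAs predicts "member" whenever the target model's confidence on the true label exceeds a threshold, exploiting the fact that models tend to be more confident on memorized training points. The sketch below illustrates this generic confidence-thresholding attack; the function name, threshold value, and confidence figures are hypothetical, and this is not the specific attack evaluated in the paper.

```python
import numpy as np

def threshold_mia(confidences, tau=0.9):
    """Guess membership: predict 'member' (True) when the model's
    confidence on the true label is at least tau.

    A generic confidence-thresholding attack sketch; tau and the
    inputs below are illustrative assumptions.
    """
    return np.asarray(confidences) >= tau

# Hypothetical model confidences: members tend to score higher
# because the model has partially memorized them.
member_conf = [0.99, 0.97, 0.95]
nonmember_conf = [0.80, 0.60]

print(threshold_mia(member_conf))     # all above tau for these members
print(threshold_mia(nonmember_conf))  # all below tau here
```

Defenses such as DMP and KCD aim to shrink exactly this confidence gap between members and non-members while preserving model utility.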


Cited by 5 publications (12 citation statements)
References 23 publications
“…privacy, model utility, and computational costs, compared to state-of-the-art empirical defenses, such as AdvReg [23], MemGuard [16], KCD [8], and SELENA [38].…”
Section: Low Model Utility
mentioning confidence: 99%
“…Our goal is to mitigate practical black-box MIAs, maintain high model utility, and have low computational costs. First, similar to KCD [8], SEDMA splits a training dataset into several subsets and trains multiple ML models (called sub-models) on each subset. Second, the trained sub-models are aggregated into several pairs (called model aggregation).…”
Section: Low Model Utility
mentioning confidence: 99%
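The citation above describes a two-step pipeline: partition the private training set into disjoint subsets, train one sub-model per subset (as in KCD), then aggregate the sub-models into pairs. A minimal sketch of that control flow is below; real training is replaced by a stand-in (each "model" is just its subset's mean), and all helper names are assumptions, not the actual SEDMA or KCD implementation.

```python
import numpy as np

def split_dataset(data, k):
    """Step 1: split the private training set into k disjoint subsets."""
    index_chunks = np.array_split(np.arange(len(data)), k)
    return [data[idx] for idx in index_chunks]

def train_submodel(subset):
    # Stand-in for real model training: here a "model" is simply
    # the mean of its subset (illustration only).
    return float(np.mean(subset))

def aggregate_pairs(models):
    """Step 2: aggregate sub-models into pairs by averaging,
    sketching the model-aggregation idea."""
    return [(models[i] + models[i + 1]) / 2
            for i in range(0, len(models) - 1, 2)]

data = np.arange(8, dtype=float)          # toy stand-in for private data
subsets = split_dataset(data, 4)          # 4 disjoint subsets of size 2
models = [train_submodel(s) for s in subsets]
pairs = aggregate_pairs(models)
print(models)                             # one "model" per subset
print(pairs)                              # one aggregate per pair
```

The privacy intuition is that each sub-model sees only a fraction of the private data, so any single record influences fewer outputs; aggregation then recovers utility.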