2021
DOI: 10.48550/arxiv.2110.08324
Preprint
Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture

Xinyu Tang,
Saeed Mahloujifar,
Liwei Song
et al.

Abstract: Membership inference attacks are a key measure to evaluate privacy leakage in machine learning (ML) models. These attacks aim to distinguish training members from non-members by exploiting differential behavior of the models on member and non-member inputs. The goal of this work is to train ML models that have high membership privacy while largely preserving their utility; we therefore aim for an empirical membership privacy guarantee as opposed to the provable privacy guarantees provided by techniques like di…
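The differential behavior the abstract refers to can be illustrated with a minimal confidence-thresholding attack, a common baseline in this literature. This is a toy sketch, not the paper's evaluation: the member and non-member confidence distributions below are synthetic assumptions standing in for a real model's outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for a trained classifier's top-class softmax
# confidence: members (training points) tend to be scored more
# confidently than non-members. These distributions are assumptions.
member_conf = rng.beta(8, 2, size=1000)     # skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=1000)  # centered near 0.5

def attack(confidences, threshold=0.7):
    """Predict 'member' whenever top-class confidence exceeds the threshold."""
    return confidences > threshold

# Balanced attack accuracy: members correctly flagged plus
# non-members correctly passed, averaged.
tp = attack(member_conf).mean()
tn = (~attack(nonmember_conf)).mean()
accuracy = (tp + tn) / 2
```

An accuracy noticeably above 0.5 on such a balanced set is exactly the privacy leakage these attacks measure; a defense like the one proposed here aims to push it back toward random guessing.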

Cited by 6 publications (16 citation statements)
References 28 publications
“…After we submitted our work to PETS 2022 Issue 2, Tang et al. [39] published a concurrent and independent work similar to ours on arXiv.…”
mentioning
confidence: 66%
“…Note added. Although this paper was submitted to a certain conference and currently under review, we recently became aware of a concurrent independent work related to KCD by Tang et al [72].…”
Section: Discussion
mentioning
confidence: 99%
“…It restricts the private classifier's direct access to the private training dataset, and thus significantly reduces the membership information leakage. Following DMP, there are two studies, SELENA [12] and CKD/PCKD [13], which both split the original dataset into subsets and leverage the subset models to distill the final public model. The advantage of these two DMP followers is that they avoid the need for extra public data that may be hard to obtain in some applications.…”
Section: B Defense On MIAs
mentioning
confidence: 99%
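The split-then-distill pattern described in the excerpt above can be sketched in a few lines. This is a toy illustration under stated assumptions, not SELENA's actual architecture: the sub-models here are nearest-centroid classifiers on synthetic 2-D blobs, and the final distilled model is omitted. The key idea it demonstrates is that each training point is soft-labeled only by sub-models whose subset did not contain it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-class data (stands in for a private training set).
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

K = 5  # number of data splits (an illustrative choice)
splits = np.array_split(rng.permutation(len(X)), K)

def fit_centroids(Xs, ys):
    # One tiny "model" per subset: class centroids.
    return np.stack([Xs[ys == c].mean(axis=0) for c in (0, 1)])

def soft_predict(centroids, x):
    # Softmax over negative distances yields a soft label.
    d = -np.linalg.norm(centroids - x, axis=1)
    e = np.exp(d - d.max())
    return e / e.sum()

models = [fit_centroids(X[s], y[s]) for s in splits]

# Self-distillation step: each training point is labeled only by the
# sub-models that never trained on it; these soft labels would then
# supervise the final "public" model (omitted here).
soft_labels = np.zeros((len(X), 2))
for i in range(len(X)):
    outs = [soft_predict(m, X[i])
            for m, s in zip(models, splits) if i not in s]
    soft_labels[i] = np.mean(outs, axis=0)
```

Because no point's soft label comes from a model that memorized it, the distilled model's behavior on members mimics its behavior on non-members, which is what blunts the differential signal membership inference attacks rely on.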
“…However, it is not trivial to apply them directly to image translation tasks. As the sample sizes of datasets in image translation are generally smaller than those in classification, splitting the data and training on subsets as in [12], [13] brings a higher risk of overfitting. Moreover, the major task of the private teacher model in [11] is to select data by the entropy of the prediction, but the output of an image translation task does not carry such entropy information, which indicates that [11] cannot be fully utilized for image translation.…”
Section: Introduction
mentioning
confidence: 99%