2020
DOI: 10.48550/arxiv.2008.13578
Preprint
Against Membership Inference Attack: Pruning is All You Need

Cited by 3 publications (10 citation statements)
References 26 publications
“…This is similar to confidence-based attacks as in [20,21]. [22][23][24][25][26][27] all study how various types of regularization can defend against MI attacks. Furthermore, MI is a special case of differential privacy (DP), and we refer the reader to [28] and [29] for a survey of DP results and its connections to deep learning, respectively, as well as to [30] for a precise connection between DP and MI.…”
Section: Related Work
confidence: 88%
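For context on the confidence-based attacks referenced in this statement, the following is a minimal sketch, not taken from the cited works: a record is guessed to be a training member when the model's top softmax confidence exceeds a threshold. The threshold `tau` and the model are hypothetical; in practice `tau` would be calibrated with shadow models or held-out data.

```python
import torch
import torch.nn.functional as F

def confidence_mi_attack(model, x, tau=0.9):
    """Guess membership from the model's top softmax confidence.

    Records on which the model is highly confident are guessed to be
    training members. `tau` is an illustrative threshold, not a value
    from the cited papers.
    """
    model.eval()
    with torch.no_grad():
        logits = model(x)                               # (batch, num_classes)
        confidence = F.softmax(logits, dim=1).max(dim=1).values
    return confidence > tau                             # True => predicted "member"
```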
“…In this paper, we are interested in understanding both empirically and theoretically how overparameterization affects MI in classification. [25,34] show that pruning a network can improve MI robustness. [27] show empirically that MI tends to be easier on more challenging learning tasks.…”
Section: Related Work
confidence: 99%
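To illustrate the pruning defenses studied in [25,34], here is a minimal sketch using PyTorch's built-in pruning utilities. The 90% sparsity level is an arbitrary assumption rather than a value from the cited papers, and this is the generic magnitude-pruning idea, not any one paper's exact procedure.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def magnitude_prune(model: nn.Module, amount: float = 0.9) -> nn.Module:
    """L1-magnitude prune every Linear/Conv2d layer in place.

    Removing low-magnitude weights reduces the model's capacity to
    memorize individual training records, which is the intuition
    behind pruning-based MI defenses.
    """
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake the mask into the weights
    return model
```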
“…Jia et al. [20] studied perturbing the confidence score vector to evade the attack model's membership classification. Wang et al. [36] reduced model storage and computation to defend against membership inference attacks. Yin et al. [39] defined utility and privacy notions for the target classifier, formulated the defense's design goal as an optimization problem, and solved it.…”
Section: Protection Mechanism Against Membership Inference Attack
confidence: 99%
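A minimal sketch of the confidence-masking idea attributed to Jia et al. [20] above. Random Gaussian noise here is a simplified stand-in for the carefully optimized adversarial perturbation of the actual method, and `noise_scale` is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def mask_confidence_vector(logits: torch.Tensor, noise_scale: float = 0.1):
    """Perturb a single example's confidence vector before releasing it,
    making it less useful to a membership classifier."""
    noisy = F.softmax(logits + noise_scale * torch.randn_like(logits), dim=-1)
    # Preserve the predicted label so classification utility is unchanged.
    if noisy.argmax() != logits.argmax():
        return F.softmax(logits, dim=-1)
    return noisy
```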
“…As a result, the prediction scores or the parameter gradients in the target model are distinguishable for members and non-members of the training data. Several defenses [31,27,29,33,26,32,36,39] have been proposed, and most of them try to reduce overfitting through various regularization techniques, such as L2 regularization [31], min-max game based adversarial regularization [26,39], and dropout [29]. In addition, Song et al. [32] showed that early stopping outperformed other overfitting-prevention countermeasures against membership inference attacks on classifiers.…”
Section: Introduction
confidence: 99%
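A minimal sketch of the overfitting-prevention defenses listed in this statement, using standard PyTorch mechanisms: L2 regularization via `weight_decay`, dropout in the architecture, and early stopping on validation loss. The specific coefficients, layer sizes, and patience are illustrative assumptions, not values from the cited papers.

```python
import torch
import torch.nn as nn

# Dropout [29] is added directly to the target model's architecture.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)

# L2 regularization [31] corresponds to weight_decay in the optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

def should_stop(val_losses, patience=3):
    """Early stopping [32]: stop when validation loss has not improved
    for `patience` consecutive epochs (patience value is illustrative)."""
    if len(val_losses) <= patience:
        return False
    best = min(val_losses[:-patience])
    return all(loss >= best for loss in val_losses[-patience:])
```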
“…Existing empirical privacy defenses can be categorized by their method of protecting the training data (e.g., regularization [26,30], confidence-vector masking [20,49], knowledge distillation [44]). Alternatively, one can group defenses by whether they use only the private training data [44] or require access to reference data [20,26,30,40,48,49], defined as additional data from the same (or a similar) underlying distribution [30]. The two most prominent differentially private defenses can also be distinguished according to this distinction, where PATE [33] requires access to (unlabeled) reference data but DP-SGD [1] does not.…”
Section: Introduction
confidence: 99%
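For reference, the DP-SGD defense [1] mentioned in this statement amounts to clipping each per-example gradient and adding Gaussian noise before the optimizer step. Below is a minimal sketch under that description; the clip norm and noise multiplier are illustrative assumptions, and a production implementation would use a dedicated library such as Opacus rather than this microbatch loop.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, optimizer,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step: clip each per-example gradient to `clip_norm`,
    sum, add Gaussian noise scaled by `noise_multiplier`, then average."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):  # per-example gradients via a microbatch loop
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-6)).clamp(max=1.0)  # clip to clip_norm
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    for p, s in zip(model.parameters(), summed):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm
        p.grad = (s + noise) / len(xs)
    optimizer.step()
```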