Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/432

Against Membership Inference Attack: Pruning is All You Need

Abstract: The large model size, high computational cost, and vulnerability to membership inference attacks (MIA) have impeded the popularity of deep learning or deep neural networks (DNNs), especially on mobile devices. To address these challenges, we envision that the weight pruning technique can help DNNs resist MIA while reducing model storage and computation. In this work, we propose a pruning algorithm, and we show that it can find a subnetwork that prevents privacy leakage fro…
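The paper's own pruning algorithm is not visible in the truncated abstract, but the family of techniques it builds on is standard weight pruning. Below is a minimal, hedged sketch of generic one-shot magnitude pruning in PyTorch, purely for orientation; the function name, layer choices, and sparsity level are illustrative assumptions, not the authors' method.

# Illustrative sketch only: generic magnitude pruning, NOT the paper's
# exact algorithm (which the truncated abstract does not show).
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights in every Linear/Conv2d layer.

    sparsity: fraction of weights to remove, e.g. 0.9 keeps 10% of weights.
    """
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.data
            k = int(w.numel() * sparsity)
            if k == 0:
                continue
            # Threshold = k-th smallest absolute weight in this layer.
            threshold = w.abs().flatten().kthvalue(k).values
            mask = (w.abs() > threshold).float()
            module.weight.data.mul_(mask)               # apply binary mask
            module.register_buffer("prune_mask", mask)  # keep mask to re-apply after updates

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.9)

The intuition connecting this to MIA defense: a sparser subnetwork has less capacity to memorize individual training examples, so the confidence gap between members and non-members that MIA exploits tends to shrink.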

Cited by 24 publications (9 citation statements). References: 0 publications.
“…Recent studies show that private training data can be recovered from a deep learning model's shared gradients (Zhu et al., 2019; Geiping et al., 2020; Wang et al., 2021). For instance, "Deep Leakage from Gradients" (DLG) (Zhu et al., 2019) showed that shared gradients can leak private training data in image classification.…”
Section: Introduction
confidence: 99%
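To make the cited DLG attack concrete, here is a hedged sketch of its gradient-matching idea: the attacker optimizes a dummy input and soft label so that their gradients match the victim's shared gradients. The toy model, shapes, and optimizer settings are assumptions for illustration, not the setup of Zhu et al.

# Sketch of the DLG idea (Zhu et al., 2019): recover a training example
# by matching gradients. Model and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(32, 4)               # toy victim model (assumption)
x_true = torch.randn(1, 32)
y_true = torch.tensor([2])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

# The attacker sees only `model` and `true_grads`, then optimizes dummies.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)   # soft label, as in DLG
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Cross-entropy with a soft label, written out explicitly.
    loss = -(F.softmax(y_dummy, dim=-1)
             * F.log_softmax(model(x_dummy), dim=-1)).sum()
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(20):
    opt.step(closure)
# After optimization, x_dummy approximates the private input x_true.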
“…Jia et al. [12] studied perturbing the confidence score vector to evade the attack model's membership classification. Wang et al. [17] reduced model storage and computational operations to defend against membership inference attacks. Yin et al. [18] defined utility and privacy for the target classifier, formulated the design goal of the defense as an optimization problem, and solved it.…”
Section: Protection Mechanism Against Membership Inference Attack
confidence: 99%
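For context on what these defenses are evading, here is a minimal sketch of the simplest confidence-based membership inference test: members tend to receive higher top-class confidence than non-members. The threshold value and function name are assumptions for illustration.

# Minimal confidence-thresholding membership inference (illustrative).
import torch
import torch.nn.functional as F

@torch.no_grad()
def infer_membership(model, x, threshold=0.9):
    """Guess 'member' when top softmax confidence exceeds the threshold."""
    probs = F.softmax(model(x), dim=-1)
    top_conf = probs.max(dim=-1).values
    return top_conf > threshold   # boolean tensor: True = predicted member

A MemGuard-style defense in the spirit of Jia et al. [12] would add a small, utility-preserving perturbation to the confidence vector so that this classifier's decision flips without changing the predicted label.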
“…Wang et al. [17] reduced model storage and computational operations to defend against membership inference attacks. Yin et al.…”
Section: Related Work
confidence: 99%
“…For privacy preservation, differential privacy (DP) protects against data reconstruction and membership inference with mathematically proven guarantees. Unfortunately, it imposes a significant accuracy loss when protecting complicated models [29]. Homomorphic encryption brings too much computation overhead, making it unsuitable for models with relatively large numbers of parameters.…”
Section: Related Work
confidence: 99%
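The accuracy loss mentioned above comes from the mechanics of DP training. Below is a hedged sketch of a DP-SGD-style update, per-example gradient clipping plus Gaussian noise; the clip norm, noise multiplier, and function name are illustrative assumptions. The injected noise scales with model size, which is why large, complicated models suffer the most.

# Sketch of a DP-SGD style step (illustrative parameters, not a tuned recipe).
import torch

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """per_example_grads: one tensor per param, each with a leading batch dim."""
    batch = per_example_grads[0].shape[0]
    # Clip each example's full gradient to L2 norm <= clip_norm.
    flat = torch.cat([g.reshape(batch, -1) for g in per_example_grads], dim=1)
    norms = flat.norm(dim=1, keepdim=True).clamp(min=1e-12)
    scale = (clip_norm / norms).clamp(max=1.0)
    for p, g in zip(params, per_example_grads):
        clipped = g * scale.view(-1, *([1] * (g.dim() - 1)))
        # Gaussian noise calibrated to the clipping bound; this noise is
        # the source of the accuracy loss the citation refers to.
        noisy = clipped.sum(0) + noise_mult * clip_norm * torch.randn_like(p)
        p.data -= lr * noisy / batch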