2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp.2019.00065

Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning

Abstract: Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. […]
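The attack surface named in the abstract, the parameter updates observed during training, is easiest to see in a federated averaging loop. The sketch below is a toy illustration under assumed details (a linear model, four participants, the hypothetical helper `local_sgd_step`), not the authors' experimental setup; it only shows that a passive white-box adversary sitting at the aggregator observes every participant's parameter update each round.

```python
import torch

# Toy federated-averaging loop: a passive white-box adversary at the
# aggregator records each participant's parameter update every round,
# which is the leakage channel the paper analyzes. All details here
# (model size, client count, data) are made up for illustration.

def local_sgd_step(weights, x, y, lr=0.1):
    """One local SGD step on a participant's private batch (hypothetical helper)."""
    w = weights.clone().requires_grad_(True)
    loss = torch.nn.functional.mse_loss(x @ w, y)
    loss.backward()
    return (w - lr * w.grad).detach()

torch.manual_seed(0)
global_w = torch.zeros(5)
observed_updates = []  # what the curious aggregator keeps

for rnd in range(3):
    client_ws = []
    for _ in range(4):  # four participants, each with private data
        x, y = torch.randn(8, 5), torch.randn(8)
        new_w = local_sgd_step(global_w, x, y)
        observed_updates.append(new_w - global_w)  # per-round update leaks
        client_ws.append(new_w)
    global_w = torch.stack(client_ws).mean(dim=0)  # FedAvg aggregation

print(f"adversary observed {len(observed_updates)} update vectors")
```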

Cited by 1,133 publications (933 citation statements). References 25 publications.
“…We evaluate MemGuard and compare it with state-of-the-art defenses [1,42,56,58] on three real-world datasets. Our empirical results show that MemGuard can effectively defend against state-of-the-art black-box membership inference attacks [43,56]. In particular, as MemGuard is allowed to add larger noise (we measure the magnitude of the noise using its L1-norm), the inference accuracies of all evaluated membership inference attacks become smaller.…”
Section: Introduction
confidence: 93%
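To make the noise-budget idea in the excerpt above concrete, the sketch below perturbs a model's confidence vector under an L1 budget while keeping the predicted label. The projection heuristic, the budget values, and the fallback rule are illustrative assumptions; MemGuard itself finds the noise by solving a constrained optimization rather than by random sampling.

```python
import numpy as np

def perturb_confidences(scores, l1_budget, rng):
    """Add noise with L1 norm equal to l1_budget to a confidence vector,
    keeping a valid distribution and the original predicted label.
    (Heuristic stand-in for MemGuard's optimized adversarial noise.)"""
    noise = rng.standard_normal(scores.shape)
    noise -= noise.mean()                      # keep the perturbed vector summing to 1
    noise *= l1_budget / np.abs(noise).sum()   # scale the noise onto the L1 budget
    out = np.clip(scores + noise, 1e-6, None)  # stay non-negative
    out /= out.sum()                           # renormalize to a distribution
    if out.argmax() != scores.argmax():        # utility constraint: label unchanged
        return scores                          # fall back to the clean scores
    return out

rng = np.random.default_rng(0)
clean = np.array([0.7, 0.2, 0.1])
for budget in (0.1, 0.3, 0.5):  # larger budget -> stronger defense, more distortion
    print(budget, perturb_confidences(clean, budget, rng))
```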
“…More recently, Nasr et al. [43] proposed membership inference attacks against white-box ML models. For a data sample, they calculate the corresponding gradients over the white-box target classifier's parameters and use these gradients as the data sample's features for membership inference.…”
Section: Related Work 2.1 Membership Inference
confidence: 99%
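A minimal sketch of the gradient-as-feature extraction described in the excerpt above: for each query sample, take the gradient of the loss with respect to the target model's parameters (here, only the final layer, a common choice) and feed the flattened gradient to a separate attack classifier. The toy target model, the layer choice, and the attack classifier architecture are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

# Sketch of the white-box feature extraction attributed to Nasr et al. [43]:
# the per-sample gradient of the loss over the target model's parameters
# becomes the feature vector of a membership-inference attack model.

target = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
loss_fn = nn.CrossEntropyLoss()
last_layer = target[-1]  # common simplification: use only the final layer

def gradient_feature(x, y):
    """Flattened gradient of the loss w.r.t. the last layer, for one sample."""
    target.zero_grad()
    loss = loss_fn(target(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    return torch.cat([p.grad.flatten() for p in last_layer.parameters()])

x, y = torch.randn(20), torch.tensor(3)
feat = gradient_feature(x, y)

# A separate attack classifier maps gradient features to member / non-member.
attack_model = nn.Sequential(nn.Linear(feat.numel(), 64), nn.ReLU(), nn.Linear(64, 2))
print(attack_model(feat).shape)  # torch.Size([2])
```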
“…However, [39], [41], [42] failed to present a performance analysis of the state-of-the-art edge caching schemes. On the other hand, [43] compared the performance of the FL and centralized schemes. However, the analysis was limited to the privacy performance of the schemes.…”
Section: B. Literature Review
confidence: 99%
“…Membership inference. Membership inference against classification models has been studied in [22,25,27,28], and later studied for generative models and language models [15,31] as well as in the collaborative learning setting [23,24]. The attack in [28] focuses on black-box models, exploiting the differences in the models' outputs on training and non-training inputs.…”
Section: Related Work
confidence: 99%
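The black-box observation in the last excerpt, that models behave differently on training and non-training inputs, can be caricatured with a confidence-threshold test. The numbers below are made up, and thresholding the top softmax score is a deliberate simplification; the attack in [28] instead trains shadow models to make this decision.

```python
import numpy as np

# Minimal caricature of a black-box membership test: overfit models are
# more confident on training members than on unseen inputs, so a simple
# threshold on the top softmax score already separates the two groups.

def predict_member(confidences, threshold=0.9):
    return confidences.max() >= threshold

members = [np.array([0.97, 0.02, 0.01]), np.array([0.93, 0.05, 0.02])]
non_members = [np.array([0.55, 0.30, 0.15]), np.array([0.48, 0.40, 0.12])]

guesses = [predict_member(c) for c in members + non_members]
labels = [True] * len(members) + [False] * len(non_members)
accuracy = np.mean([g == t for g, t in zip(guesses, labels)])
print(f"attack accuracy on this toy data: {accuracy:.2f}")
```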