Proceedings 2019 Network and Distributed System Security Symposium 2019
DOI: 10.14722/ndss.2019.23119

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Abstract: Machine learning (ML) has become a core component of many real-world applications and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack has shown that extraction of information on the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, the early demonstrations of the feasibility of such attacks have many …

Cited by 526 publications (771 citation statements) | References 28 publications
“…Membership inference attacks [24], [25] attempt to determine if a record obtained by an adversary was part of the original training data of the model. Whilst this attack does not compromise the security of the model, it breaches the privacy of the individual records.…”
Section: Related Work
confidence: 99%
“…These attacks create a shadow model [24] to mimic the behavior of the target model. Salem et al. [25] construct a shadow model using only positive-class samples and negative noise generated from uniformly random feature vectors. However, it is hypothesized that these random samples belong to non-members, i.e., the negative class [25, §V.B].…”
Section: Related Work
confidence: 99%
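The shadow-model idea cited above can be made concrete with a minimal sketch. All specifics below — the synthetic data, the random-forest target, the sorted-posterior attack features — are illustrative assumptions, not the exact pipeline of [24] or [25]: an attack classifier is supervised by a shadow model (whose member/non-member split the attacker knows) and then transferred to the target model.

```python
# Hedged sketch of a shadow-model membership inference attack in the
# spirit of [24]; data, models, and features are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n=400, d=8):
    # Random labels force the model to memorize, making leakage stark.
    return rng.normal(size=(n, d)), rng.integers(0, 2, n)

# Target model: the attacker can only query its posterior vectors.
X_in, y_in = make_split()            # the target's (secret) training members
X_out, _ = make_split()              # true non-members
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

# Shadow model: trained by the attacker on data it controls, so it knows
# exactly which records are members of the shadow training set.
Xs_in, ys_in = make_split()
Xs_out, _ = make_split()
shadow = RandomForestClassifier(n_estimators=50, random_state=1).fit(Xs_in, ys_in)

def attack_features(model, X):
    # Attack input: the posterior vector, sorted so it is class-agnostic.
    return np.sort(model.predict_proba(X), axis=1)[:, ::-1]

# Attack model: member vs. non-member, supervised by the shadow split.
A = np.vstack([attack_features(shadow, Xs_in), attack_features(shadow, Xs_out)])
b = np.r_[np.ones(len(Xs_in)), np.zeros(len(Xs_out))]
attack = LogisticRegression().fit(A, b)

# Transfer the attack to the target; accuracy above 0.5 means leakage.
M = np.vstack([attack_features(target, X_in), attack_features(target, X_out)])
m = np.r_[np.ones(len(X_in)), np.zeros(len(X_out))]
leak_acc = attack.score(M, m)
print(f"membership attack accuracy: {leak_acc:.2f}")
```

Because the overfit target assigns visibly higher confidence to memorized records, the attack classifier learned on the shadow model transfers well; Salem et al.'s variant in the statement above replaces the shadow non-members with uniformly random feature vectors.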
“…The attack technique proposed in [17], however, requires substantial computational power and involves training multiple machine learning models. The techniques proposed in [18] differ in that they require the attacker to derive effective threshold values. Figure 1 gives a workflow sketch of the membership inference attack generation algorithm.…”
Section: B. Attack Generation
confidence: 99%
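The threshold-based technique attributed to [18] above can be sketched without any shadow training at all: declare a record a member when the target's top posterior exceeds a threshold. The data, model, and threshold value below are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch of a threshold-style membership inference attack in the
# spirit of [18]; the deliberately overfit target makes leakage visible.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Random labels can only be memorized, so the target overfits by design.
X_in = rng.normal(size=(400, 8))
y_in = rng.integers(0, 2, 400)
X_out = rng.normal(size=(400, 8))    # true non-members
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

def is_member(model, X, tau=0.75):
    # Membership guess: top confidence >= tau suggests a memorized record.
    return model.predict_proba(X).max(axis=1) >= tau

# Balanced attack accuracy: true positives on members plus true negatives
# on non-members; anything above 0.5 indicates membership leakage.
tpr = is_member(target, X_in).mean()
tnr = 1.0 - is_member(target, X_out).mean()
acc = 0.5 * (tpr + tnr)
print(f"threshold attack accuracy: {acc:.2f}")
```

The "effective threshold values" the citing paper mentions correspond to tuning tau; [18] also discusses choosing it without labeled member/non-member data.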
“…The membership inference attack against machine learning models was first presented in [17], where the authors proposed the shadow-model attack. The authors of [14] characterized attack vulnerability with respect to different model types and datasets and introduced the vulnerability of loosely federated systems, while the authors of [18] relax adversarial assumptions and construct a model- and data-independent attack. Hayes et al.…”
Section: Related Work
confidence: 99%