2016 IEEE 29th Computer Security Foundations Symposium (CSF)
DOI: 10.1109/csf.2016.32

A Methodology for Formalizing Model-Inversion Attacks

Cited by 143 publications (110 citation statements)
References 15 publications
“…Machine learning has emerged as an important technology, enabling a wide range of applications including computer vision, machine translation, health analytics, and advertising, among others. The fact that many compelling applications of this technology involve the collection and processing of sensitive personal data has given rise to concerns about privacy [1,2,3,4,5,6,7,8,9]. In particular, when machine learning algorithms are applied to private training data, the resulting models might unwittingly leak information about that data through either their behavior (i.e., black-box attack) or the details of their structure (i.e., white-box attack).…”
Section: Introduction (mentioning; confidence: 99%)
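To make the two threat models named in this excerpt concrete, here is a schematic Python sketch of the access an attacker holds in each setting; the `predict`/`get_params` interface is a hypothetical stand-in, not an API from the cited work.

```python
class BlackBoxAccess:
    """Black-box setting: the attacker observes the model only
    through its behavior, i.e., prediction queries."""

    def __init__(self, model):
        self._model = model  # internals hidden from the attacker

    def predict(self, x):
        return self._model.predict(x)  # the only exposed operation


class WhiteBoxAccess(BlackBoxAccess):
    """White-box setting: the attacker can additionally inspect the
    model's structure and parameters directly."""

    def parameters(self):
        return self._model.get_params()  # hypothetical accessor
```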
“…A second factor identified as relevant to privacy risk is influence [5], a quantity that arises often in the study of Boolean functions [20]. Influence measures the extent to which a particular input to a function is able to cause changes to its output.…”
Section: Introduction (mentioning; confidence: 99%)
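For reference, the verbal definition in this excerpt corresponds to the standard notion of influence from the analysis of Boolean functions (the notation below is the textbook convention, not notation from the citing paper):

$$\mathrm{Inf}_i(f) \;=\; \Pr_{x \sim \{0,1\}^n}\left[\, f(x) \neq f(x \oplus e_i) \,\right],$$

where $f : \{0,1\}^n \to \{0,1\}$ and $x \oplus e_i$ denotes $x$ with its $i$-th bit flipped.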
“…In this attack, the attacker mimics data samples to deceive the ML algorithm into incorrectly classifying the original samples with labels different from those of the impersonated ones [272,274,275]. The last possible attack is inversion, which exploits the application programming interfaces (APIs) that current ML platforms expose to users in order to collect rough information about the pre-trained ML models [271,276]. This extracted information is then used in reverse engineering to obtain the sensitive data of users.…”
Section: Security of ML and DL Methods (mentioning; confidence: 99%)
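As a rough illustration of the inversion attack this excerpt describes, the sketch below hill-climbs an input that maximizes a black-box model's reported confidence in a target class, recovering a representative of that class using only query access. The `query_model` oracle, the toy softmax model behind it, and all hyperparameters are our illustrative assumptions, not the cited attack itself.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))  # toy stand-in for a trained classifier

def query_model(x):
    """Black-box oracle: returns a confidence vector for input x.
    A toy softmax model stands in here for a real prediction API."""
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

def invert_class(target, dim=8, steps=200, lr=0.5, eps=1e-3):
    """Confidence-guided inversion: hill-climb an input that maximizes
    the model's confidence in `target`, using only query access
    (finite differences, no gradients from the model itself)."""
    x = np.full(dim, 0.5)  # start from a neutral input
    for _ in range(steps):
        base = query_model(x)[target]
        grad = np.zeros(dim)
        for i in range(dim):  # finite-difference gradient estimate
            xp = x.copy()
            xp[i] += eps
            grad[i] = (query_model(xp)[target] - base) / eps
        x = np.clip(x + lr * grad, 0.0, 1.0)  # stay in a valid feature range
    return x

recovered = invert_class(target=0)
print(query_model(recovered)[0])  # confidence the oracle now assigns
```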
“…Data pollution attacks have been studied long before other attacks became relevant [27]. In a model inversion attack, the attacker learns information about data used to train the machine learning model [28], [29], [30]. A similar but stronger setting is the membership inference attack where the attacker identifies whether an individual's information was present in the training data [31].…”
Section: Comments (mentioning; confidence: 99%)
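As a minimal illustration of membership inference in the sense of [31], the sketch below uses the common loss-threshold heuristic: training members tend to incur lower loss than non-members because the model fits them more tightly. The interface and the threshold value are our illustrative assumptions, not the cited construction.

```python
import numpy as np

def membership_guess(confidences, labels, threshold=0.5):
    """Loss-threshold membership inference: samples the model fits
    unusually well (low cross-entropy loss) are guessed to have been
    in the training set.

    confidences : (n, k) array of per-class model confidences
    labels      : (n,) array of true class indices
    threshold   : loss cutoff; calibrated on known non-members in practice
    """
    per_sample = confidences[np.arange(len(labels)), labels]
    loss = -np.log(per_sample + 1e-12)  # cross-entropy of the true class
    return loss < threshold             # True = "likely a training member"

# Toy usage: a confident (member-like) and an uncertain (non-member-like) sample.
conf = np.array([[0.98, 0.01, 0.01],
                 [0.40, 0.35, 0.25]])
print(membership_guess(conf, np.array([0, 0])))  # [ True False]
```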