2019
DOI: 10.48550/arxiv.1909.11835
Preprint
GAMIN: An Adversarial Approach to Black-Box Model Inversion

Abstract: Recent works have demonstrated that machine learning models are vulnerable to model inversion attacks, which expose sensitive information contained in their training datasets. While some model inversion attacks have been developed in the black-box attack setting, in which the adversary does not have direct access to the structure of the model, few have so far been conducted against complex models such as deep neural networks. In this paper, we introduce GAMIN (for Generativ…

Cited by 8 publications (28 citation statements)
References 35 publications
“…While for the black-box scenario, there were no impressive works before the birth of GANs. In [3], a model inversion attack framework was built under the black-box setting. Given a target model f and a label y_t, an attacker aims at characterizing data x_t belonging to y_t.…”
Section: Attacks On Preimage
confidence: 99%
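The quoted setup (query-only access to a target model f and a target label y_t) can be illustrated with a minimal gradient-free search. The toy classifier, dimensions, and hill-climbing strategy below are illustrative assumptions for a sketch, not the actual attack from [3]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target classifier: the attacker never sees _W, only the query
# interface below (model and dimensions are illustrative assumptions).
_W = rng.normal(size=(3, 4))

def query_target(x):
    """Black-box oracle f: input vector -> confidence scores only."""
    logits = _W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def invert_label(y_t, dim=4, iters=500, step=0.1):
    """Characterize an input x_t for target label y_t using only queries:
    gradient-free hill climbing on the returned confidence score."""
    x = rng.normal(size=dim)
    best = query_target(x)[y_t]
    for _ in range(iters):
        cand = x + step * rng.normal(size=dim)
        conf = query_target(cand)[y_t]
        if conf > best:  # keep perturbations that raise target confidence
            x, best = cand, conf
    return x, best

x_t, conf = invert_label(y_t=1)
```

The point of the sketch is that no gradients or model internals are used: the attacker only observes confidence scores, which is precisely what makes the black-box setting harder than the white-box one.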
“…This scenario has been left as an open problem. Aïvodji et al [21] have introduced a generative adversarial model inversion under the black-box scenario, and they have achieved considerable results even against a deep model. Although their approach does not use any information regarding the model internals, it is based on trial and error under assumptions about the target model structure as well as the data structure.…”
Section: Related Work
confidence: 99%
“…Table 1 gives a brief review of the above-mentioned related works.
[15] MIA in privacy violation in pharmacogenetics by Fredrikson et al.
[18] Initial introduction of MIA by Fredrikson et al.
[19] Stealing machine learning models via prediction APIs by Tramèr et al.
[20] Discussion of the black-box scenario.
[21] Introduced a generative adversarial model inversion under the black-box scenario by Aïvodji et al.
[23,24] Model inversion attacks for prediction systems without knowledge of non-sensitive attributes by Hidano et al.
[25] Using generative models in MIA on deep neural networks by Zhang et al.
[26] Generating an optimum seed image with a pre-trained GAN to initialize the MIA process on a DL-based recognition system by Khosravy et al.…”
Section: Related Work
confidence: 99%
“…Unfortunately, since ML models tend to memorize information about training data, private information can still be exposed through access to the models even when the data are stored and processed securely [21]. Indeed, prior studies of privacy attacks have demonstrated the possibility of exposing training data at different granularities, ranging from "coarse-grained" information such as determining whether a certain point participated in training [11,15,17,22] or whether a training dataset satisfies certain properties [10,16], to more "fine-grained" information such as reconstructing the raw data [2,4,8,25].…”
Section: Introduction
confidence: 99%
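The "coarse-grained" end of this spectrum, membership inference, is often demonstrated with a simple confidence-threshold test. The toy model below (confidence decaying with distance to the nearest training point, mimicking memorization) and the threshold value are illustrative assumptions, not an attack from the cited works:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a model that "memorizes": confidence is highest on
# points it was trained on (the mechanism behind threshold-based
# membership inference; model and threshold here are illustrative).
train_points = rng.normal(size=(20, 2))

def confidence(x):
    # Confidence decays with distance to the nearest training point,
    # mimicking overfitting-induced memorization.
    d = np.linalg.norm(train_points - x, axis=1).min()
    return float(np.exp(-d))

def is_member(x, threshold=0.9):
    """Threshold attack: guess 'member' when confidence is suspiciously high."""
    return confidence(x) >= threshold

members = [is_member(p) for p in train_points]            # true members
outsiders = [is_member(p) for p in rng.normal(size=(20, 2)) + 5.0]
```

In this caricature the attack separates members from non-members perfectly; on real models the gap between member and non-member confidence is smaller, which is what the cited studies quantify.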
“…The synthesis is implemented as a gradient ascent algorithm. By contrast, existing black-box attacks [2,20] are based on training an attack network that predicts the sensitive feature from the input confidence scores. Despite the exclusive focus on these two threat models, in practice, ML models are often packed into a black box that only produces hard labels when queried.…”
Section: Introduction
confidence: 99%
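The attack-network idea quoted above (learn a mapping from confidence scores back to the input) can be sketched with a linear stand-in. Here plain least squares plays the role of the attack network, and the auxiliary dataset, target model, and dimensions are all assumptions for illustration; real attacks [2,20] train a neural network for this inverse map:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed target classifier; the attacker only sees its confidence outputs.
W = rng.normal(size=(3, 5))

def target_scores(X):
    logits = X @ W.T
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Attack-network training: query the target on auxiliary inputs, then fit
# an inverse mapping scores -> input (least squares standing in for the
# neural attack network).
X_aux = rng.normal(size=(200, 5))
S_aux = target_scores(X_aux)
A, *_ = np.linalg.lstsq(S_aux, X_aux, rcond=None)

# Reconstruction: given only the confidence scores of an unseen input,
# predict the input itself.
x_true = rng.normal(size=(1, 5))
x_hat = target_scores(x_true) @ A
```

Note that the 3-dimensional score vector cannot determine the 5-dimensional input uniquely, so the linear map recovers only a best estimate; this information bottleneck is exactly why the hard-label-only setting mentioned at the end of the quote is harder still.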