2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01587

Knowledge-Enriched Distributional Model Inversion Attacks

Cited by 54 publications (56 citation statements)
References 16 publications

“…Comparison with Previous MIA Approaches. We start by comparing our Plug & Play Attacks (PPA) against the most recent work on MIAs by Zhang et al. [43] (GMA), Chen et al. [3] (KED), and Wang et al. [39] (VMI). We carefully selected the hyperparameters of each attack by testing various configurations.…”
Section: Methods (mentioning)
confidence: 99%
“…Chen et al. [3] built upon this approach and improved the GAN's training process by including soft labels produced by the target model. To recover the distribution for a target class rather than a single data point, the authors proposed to learn the mean and standard deviation of the latent distribution for each target class modeled by the generator.…”
Section: Model Inversion in Deep Learning (mentioning)
confidence: 99%
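The distributional step described in this excerpt can be illustrated with a short, hedged sketch: learn the mean and standard deviation of a class-conditional Gaussian over the generator's latent space by maximizing the target model's confidence in the target class. Everything here (G, target_model, latent_dim, the toy networks) is an illustrative assumption, not the authors' implementation:

```python
# Minimal sketch (not the authors' code) of distributional model inversion:
# learn a per-class Gaussian N(mu, sigma^2) over the generator's latent
# space instead of recovering a single latent point. G and target_model are
# toy placeholders for a pretrained GAN generator and the attacked classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, n_classes, batch = 100, 10, 32
G = nn.Sequential(nn.Linear(latent_dim, 784), nn.Tanh())  # placeholder generator
target_model = nn.Linear(784, n_classes)                  # placeholder target model
target_class = 0

# Learnable parameters of the latent distribution for the target class.
mu = torch.zeros(latent_dim, requires_grad=True)
log_sigma = torch.zeros(latent_dim, requires_grad=True)   # log-std for stability

opt = torch.optim.Adam([mu, log_sigma], lr=0.02)
labels = torch.full((batch,), target_class, dtype=torch.long)

for step in range(200):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    eps = torch.randn(batch, latent_dim)
    z = mu + log_sigma.exp() * eps

    logits = target_model(G(z))             # query the target model
    loss = F.cross_entropy(logits, labels)  # push samples toward the class

    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling z ~ N(mu, sigma^2) now yields a distribution of candidate
# reconstructions for the target class, not a single image.
```

The reparameterization trick keeps the sampling step differentiable, so gradients from the target model's loss flow back into mu and log_sigma; this is what lets the attack recover a distribution rather than one point.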
“…A growing body of work has successfully demonstrated that it is possible to extract meaningful, potentially privacy-violating information from DNNs. Novel attacks such as property inference [4], model inversion [26], and membership inference [79] have shown that it is possible to extract additional properties from a model and correlate them with a specific subset of data contributors [4,29], reconstruct training data by simply querying the DNN [15,26,27,36,89], and determine the presence of a given input in the training set used for a DNN [76,77,79], emphasising the need for privacy-preserving ML (PPML) mechanisms.…”
Section: Privacy Concerns with Deep Learning (mentioning)
confidence: 99%
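As a concrete illustration of the membership inference family mentioned in this excerpt, a common baseline simply thresholds the target model's confidence, exploiting the tendency of models to be more confident on examples they were trained on. The sketch below is a generic, assumption-laden baseline; the model and threshold value are illustrative, not details from the cited works:

```python
# Hedged sketch of a confidence-thresholding membership inference baseline.
import torch

@torch.no_grad()
def membership_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Top softmax confidence per input; training members tend to score higher."""
    return torch.softmax(model(x), dim=1).max(dim=1).values

def predict_member(model, x, threshold=0.9):
    # Inputs scoring above a calibrated threshold are flagged as likely
    # members of the training set.
    return membership_score(model, x) > threshold

# Toy usage with a placeholder model and random inputs.
clf = torch.nn.Linear(20, 2)
x = torch.randn(8, 20)
print(predict_member(clf, x))  # boolean membership guesses
```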