2019 IEEE Global Communications Conference (GLOBECOM)
DOI: 10.1109/globecom38437.2019.9014134
Differentially Private Functional Mechanism for Generative Adversarial Networks

Cited by 12 publications (7 citation statements)
References 6 publications
“…For privacy-preserving data analysis, differential privacy (DP) [9,11] has been proposed as the standard privacy metric to measure the privacy risk of each data sample in a dataset, and it has already been adopted in many machine learning domains [4,8,18,20,24]. Basically, under the DP framework, privacy protection is guaranteed by limiting the change in the distribution of the output regardless of a change in the value of any one sample in the dataset.…”
Section: Differential Privacy (mentioning)
confidence: 99%
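For reference, the guarantee paraphrased in this excerpt is the standard (ε, δ)-differential privacy definition (textbook wording, not text from the citing paper): a randomized mechanism M satisfies (ε, δ)-DP if, for all neighbouring datasets D and D' differing in a single record and for every set of outputs S,

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
```

With δ = 0 this reduces to pure ε-DP, the setting typically targeted by Laplace-noise mechanisms such as the functional mechanism discussed in the next excerpt.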
“…In objective function perturbation, existing work injects Laplace noise into the coefficients to construct a differentially private loss function for GAN training. Zhang et al. [76] proposed a new privacy-preserving GAN that perturbs the coefficients of the objective function by injecting Laplace noise into the latent space based on the functional mechanism, ensuring differential privacy for the training data; it can reliably generate high-quality, realistic synthetic data samples without divulging sensitive information in the training dataset.…”
Section: Differential Privacy Synthetic Data Generation (mentioning)
confidence: 99%
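To make the mechanism referenced here concrete, the sketch below illustrates the core step of a functional-mechanism style perturbation: calibrate Laplace noise to the sensitivity of the objective's polynomial coefficients and then optimise the noisy objective. The function name, arguments, and the degree-2 example are illustrative assumptions; the cited paper applies the idea inside a GAN's latent space rather than to a plain coefficient vector.

```python
import numpy as np

def perturb_objective_coefficients(coeffs, sensitivity, epsilon, rng=None):
    """Add Laplace noise to the polynomial coefficients of a loss function.

    Generic sketch of the functional-mechanism idea (perturb the objective
    rather than the gradients or the output); the calibration below is an
    illustrative assumption, not the exact procedure of Zhang et al.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon  # Laplace scale b = Delta / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=np.shape(coeffs))
    # Training then minimises the objective rebuilt from these noisy
    # coefficients instead of the original, data-dependent ones.
    return np.asarray(coeffs, dtype=float) + noise

# Example: coefficients of a degree-2 polynomial expansion of a loss term.
coeffs = np.array([0.5, -1.2, 0.3])
noisy_coeffs = perturb_objective_coefficients(coeffs, sensitivity=2.0, epsilon=1.0)
```

Because the noise is injected once into the objective rather than at every gradient step, the privacy cost does not accumulate over training iterations, which is the usual motivation for objective-perturbation approaches.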
“…a well-trained generative adversarial network (GAN) [18] captures the underlying distribution of the real data, which means there is nothing stopping it from accidentally producing a doppelganger of a sensitive record (or a close enough sample), and simply sampling such a model could reveal much about both individual records and specific sensitive features of the training dataset [56,57]. There are a number of linkage attacks specifically developed for GANs [35,58-60], as well as some defences proposed for all data types (images, time series, structured data) [56,61-63].…”
Section: Leakage For Different Tasks (mentioning)
confidence: 99%
“…The more traditional applications of DP, outlined in [136], are DP online learning [148-150] and DP empirical risk minimization [151-157]. However, the range of learning tasks that DP has been applied to has widened and now includes nearly everything from the federated ML setting [158] to differentially private recurrent language models [159], and even differentially private generative adversarial networks [61,63], specific DP-GAN applications for generating time series [62,111], and tabular mixed-feature datasets [160].…”
Section: Differential Privacy Surveys (mentioning)
confidence: 99%