2020 International Conference on Advanced Computing and Applications (ACOMP)
DOI: 10.1109/acomp50827.2020.00028
Investigating Local Differential Privacy and Generative Adversarial Network in Collecting Data

Cited by 4 publications (2 citation statements); References 22 publications
“…One of the clients participating in the system is the adversary (Ha et al., 2019). The attacker performs two tasks: the main task, in which it takes part in the federated training process to collaboratively train a model; and the adversarial task, in which it mounts an inference attack based on the GAN model (Ha and Dang, 2020). In this study, the adversarial task targets the ith participant, as shown in Figure 11.…”
Section: Methods
confidence: 99%
“…Liu et al. [60] proposed a GAN model for privacy protection that achieves differential privacy by adding carefully designed noise to the clipped gradients during model learning; it uses the moments accountant strategy to improve the stability and compatibility of the model by controlling the privacy loss, and generates high-quality synthetic data while retaining the required utility under a reasonable privacy budget. Ha and Dang [61] proposed a local differential privacy GAN model for noisy data generation, which builds a generative model by clipping the gradients during training and adding Gaussian noise to them to ensure differential privacy. Chen et al. [62] proposed a gradient-sanitized WGAN, which allows the publication of sanitized sensitive data under a strict privacy guarantee and can distort gradient information more precisely, making it possible to train deeper models and generate more informative samples.…”
Section: Differential Privacy Synthetic Data Generation
confidence: 99%
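The clip-then-perturb step that the statement above attributes to Ha and Dang (and, with noise on clipped gradients, to Liu et al.) can be sketched as follows. This is a minimal illustrative sketch, not the papers' implementation: the function name, parameter names, and default values are assumptions, and the noise scale here is simply proportional to the clipping bound rather than calibrated to a specific privacy budget.

```python
import numpy as np

def sanitize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a gradient to a maximum L2 norm, then add Gaussian noise.

    Illustrative sketch of the clip-then-perturb idea described above;
    names and defaults are assumptions, not taken from the cited papers.
    """
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(grad)
    # Scale the gradient down only if its L2 norm exceeds the bound.
    clipped = grad / max(1.0, norm / clip_norm)
    # Gaussian noise with standard deviation tied to the clipping bound,
    # so the perturbation masks any single gradient's contribution.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise
```

With `noise_multiplier=0` the function reduces to pure norm clipping, which makes the clipping bound easy to verify in isolation; in a DP training loop the noisy gradient would replace the raw one in each parameter update.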