2019 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications / 13th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE)
DOI: 10.1109/trustcom/bigdatase.2019.00057
Poisoning Attack in Federated Learning using Generative Adversarial Nets

Cited by 148 publications (58 citation statements)
References 12 publications
“…Based on the reputation metric, the central server selects the clients with high reputation values. Zhang et al [75] propose a poisoning attack against FL systems based on GAN.…”
Section: Privacy and Security (mentioning)
confidence: 99%
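As a rough illustration of the reputation-based client selection mentioned in this excerpt, the sketch below keeps a score per client and lets the server aggregate updates only from the top-ranked ones. The names (select_clients, update_reputation) and the exponential-smoothing rule are illustrative assumptions, not the metric defined in the cited work.

```python
# Hypothetical sketch: reputation-based client selection on the server side.
from typing import Dict, List

def select_clients(reputation: Dict[str, float], num_selected: int) -> List[str]:
    """Return the IDs of the clients with the highest reputation scores."""
    ranked = sorted(reputation.items(), key=lambda kv: kv[1], reverse=True)
    return [client_id for client_id, _ in ranked[:num_selected]]

def update_reputation(reputation: Dict[str, float], client_id: str,
                      update_score: float, decay: float = 0.9) -> None:
    """Exponentially smooth a client's reputation with the score of its latest update."""
    reputation[client_id] = decay * reputation.get(client_id, 0.0) + (1 - decay) * update_score

# Example: three clients, the server keeps the two most reputable ones.
reputation = {"client_a": 0.92, "client_b": 0.35, "client_c": 0.78}
print(select_clients(reputation, num_selected=2))  # ['client_a', 'client_c']
```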
“…However, they can be more compelling given that they have demonstrated their ability to generate artificial samples that are statistically representative of the training data [76]. For instance, the authors in [78] demonstrate how GANs can be used to obtain training samples through inference and then use these recovered samples to poison the training data. An adversary can act as a benign participant and stealthily train a GAN to simulate prototypical samples of the other clients' training sets, to which the attacker has no access.…”
Section: GAN Reconstruction Attacks (mentioning)
confidence: 99%
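The excerpt above describes the attack flow at a high level. The following is a minimal PyTorch sketch of that flow under simplifying assumptions: the attacker freezes a copy of the received global model, trains a generator so its outputs are classified as a target class (mimicking other clients' data), then mislabels the generated samples to poison its local training set. The architectures, class indices (TARGET_CLASS, WRONG_LABEL), and hyperparameters are illustrative, not the exact setup of the cited papers.

```python
import torch
import torch.nn as nn

LATENT_DIM, NUM_CLASSES, TARGET_CLASS, WRONG_LABEL = 64, 10, 3, 8

# Stand-in for the global FL model the attacker receives each round.
global_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                             nn.Linear(128, NUM_CLASSES))

# Generator that maps noise to fake "victim-class" samples.
generator = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                          nn.Linear(256, 28 * 28), nn.Tanh())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Step 1: train the generator against the frozen global model so that its
# outputs are classified as TARGET_CLASS, i.e. they mimic other clients' data.
for param in global_model.parameters():
    param.requires_grad_(False)
for _ in range(200):
    z = torch.randn(32, LATENT_DIM)
    fake = generator(z)
    loss = ce(global_model(fake), torch.full((32,), TARGET_CLASS, dtype=torch.long))
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()

# Step 2: label the reconstructed samples with a wrong class and use them as
# local "training data", so the attacker's next update poisons the global model.
with torch.no_grad():
    poisoned_x = generator(torch.randn(128, LATENT_DIM))
poisoned_y = torch.full((128,), WRONG_LABEL, dtype=torch.long)
print(poisoned_x.shape, poisoned_y[:5])
```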
“…The work conducted by Bhagoji et al (2019) shows that Byzantine-resilient aggregation is weak at defending against this attack. Further, J. Zhang et al (2019) attempted to use generative adversarial networks to generate training data for model poisoning attacks. Backdoor Attacks: the federated model can be backdoored by one or multiple malicious participants using model replacement (Bagdasaryan et al, 2020). A backdoor can cause certain tasks to be labeled incorrectly while the accuracy of the global model is retained.…”
Section: Challenges in Federated Learning (mentioning)
confidence: 99%
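For the model replacement mentioned in this excerpt (Bagdasaryan et al., 2020), the core idea is that the attacker scales its update so that it survives server averaging and the aggregated model is approximately replaced by the attacker's backdoored model. Below is a minimal numerical sketch, assuming plain federated averaging over equally weighted clients; the variable names and toy weights are illustrative.

```python
import numpy as np

n_clients = 10          # number of updates averaged by the server
global_w = np.array([0.5, -0.2, 1.0])     # current global weights G_t
backdoored_w = np.array([0.7, 0.1, 0.4])  # attacker's desired weights X

# Scale the malicious update so it dominates the average:
# L = gamma * (X - G_t) + G_t, with gamma ~= n_clients under plain averaging.
gamma = n_clients
malicious_model = gamma * (backdoored_w - global_w) + global_w

# Benign clients submit models close to the current global weights.
benign_models = [global_w + np.random.normal(0, 0.01, size=3) for _ in range(n_clients - 1)]

# Plain federated averaging over all submitted models.
new_global = np.mean(benign_models + [malicious_model], axis=0)
print(new_global)        # approximately backdoored_w, up to small noise
```

With gamma equal to the number of averaged clients, the benign contributions of roughly G_t cancel against the -gamma * G_t term, leaving the average close to X; with a server learning rate other than one, gamma would be scaled accordingly.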