2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS)
DOI: 10.1109/icpads47876.2019.00042

Understanding Distributed Poisoning Attack in Federated Learning

Cited by 179 publications (73 citation statements)
References 5 publications
“…One case is a type of inference attack (Krumm, 2007), designed to examine the privacy-protection effectiveness of our framework, especially for location privacy. The other is a type of distributed poisoning attack (Cao, Chang, Lin, Liu, & Sun, 2019), designed to examine the robustness of the federated learning model, that is, how stable the recommendation accuracy remains under a poisoning attack.…”
Section: Experiments and Results
confidence: 99%
“…They use a variant of Multi-Party Computation (MPC) to improve usability in the presence of malicious clients. In [31], Cao et al. analyze the effect of poisoned data and of the number of attackers on the performance of distributed poisoning attacks, and propose a scheme to drop poisoned local models during training of the global model. Gao et al. [32] introduce a privacy-preserving framework for heterogeneous federated transfer learning, which uses an end-to-end secure multi-party learning approach.…”
Section: B. Privacy and Security of FL
confidence: 99%
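The filtering idea summarized above (dropping suspicious local models before they enter the global aggregate) can be illustrated with a minimal sketch. This is not the scheme from Cao et al. [31]; it is a hypothetical distance-based filter, and the `keep_fraction` parameter and coordinate-wise-median reference point are illustrative assumptions.

```python
import numpy as np

def aggregate_with_filtering(client_updates, keep_fraction=0.8):
    """Hypothetical sketch: drop suspicious local models before averaging.

    client_updates: list of 1-D numpy arrays, one flattened model update
    per client. Updates far from the coordinate-wise median are treated
    as potentially poisoned and excluded from the global average.
    """
    updates = np.stack(client_updates)                 # (n_clients, n_params)
    median = np.median(updates, axis=0)                # robust reference point
    dists = np.linalg.norm(updates - median, axis=1)   # distance of each client update
    n_keep = max(1, int(keep_fraction * len(client_updates)))
    keep_idx = np.argsort(dists)[:n_keep]              # keep the closest clients
    return updates[keep_idx].mean(axis=0)              # plain averaging over kept clients

# Usage (hypothetical): global_update = aggregate_with_filtering([u1, u2, u3, u4])
```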
“…Poisoning attacks can occur during the training period and are primarily aimed at the availability or integrity of the data. Generally, there are two main approaches to generating poisoning attacks, namely label-flipping [22] and backdoor [23]. In this paper, we mainly focus on label-flipping, where an adversary modifies the labels of a small number of training examples while keeping the characteristics of the data unchanged, in order to degrade the performance of the model.…”
Section: A. Learning Model
confidence: 99%
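A label-flipping poisoning step of the kind described above can be sketched as follows. The function name, the `flip_fraction` parameter, and the NumPy-based label layout are illustrative assumptions, not the cited papers' implementation; the key point is that only labels change while the feature vectors stay untouched.

```python
import numpy as np

def label_flip(labels, source, target, flip_fraction=0.1, seed=0):
    """Hypothetical label-flipping poisoning sketch.

    Relabels a fraction of the samples whose class is `source` as `target`,
    leaving the corresponding feature vectors unchanged.
    """
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    source_idx = np.where(labels == source)[0]          # samples of the source class
    n_flip = int(flip_fraction * len(source_idx))
    flip_idx = rng.choice(source_idx, size=n_flip, replace=False)
    labels[flip_idx] = target                            # poison only the labels
    return labels

# Example (hypothetical): poison 10% of the '6' examples so they are labelled '2'.
# y_poisoned = label_flip(y_train, source=6, target=2, flip_fraction=0.1)
```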
“…Data poisoning: According to the work in [28, 22], when launching targeted poisoning attacks (i.e., label-flipping attacks) on a handwritten-digits classifier, the easiest and hardest source-and-target label pairs are (6, 2) and (8, 4), respectively. Accordingly, we study both targeted labels.…”
Section: A. Experimental Setup
confidence: 99%
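One simple way to quantify a targeted attack on the (source, target) pairs mentioned above is the fraction of genuine source-class test samples that the poisoned model assigns to the target class. The metric name and signature below are assumptions for illustration, not the evaluation code of the cited work.

```python
import numpy as np

def targeted_attack_success_rate(y_true, y_pred, source, target):
    """Hypothetical metric sketch for targeted label-flipping attacks.

    Returns the fraction of true `source`-class samples that the poisoned
    model misclassifies as `target`, e.g. for the pairs (6, 2) and (8, 4).
    """
    source_mask = (y_true == source)
    if source_mask.sum() == 0:
        return 0.0
    return float(np.mean(y_pred[source_mask] == target))

# Example (hypothetical) with the label pairs discussed above:
# rate_62 = targeted_attack_success_rate(y_test, model_preds, source=6, target=2)
# rate_84 = targeted_attack_success_rate(y_test, model_preds, source=8, target=4)
```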