2020
DOI: 10.48550/arxiv.2007.05084
Preprint

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

Abstract: Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors during training. The goal of a backdoor is to corrupt the performance of the trained model on specific sub-tasks (e.g., by classifying green cars as frogs). A range of FL backdoor attacks have been introduced in the literature, but also methods to defend against them, and it is currently an open question whether FL systems can be tailored to be robust against backdoors. In this work, we provide…

Cited by 29 publications (25 citation statements)
References 56 publications
“…• Edge Backdoor: the authors in [65] proposed a type of backdoor attack that relies on rare data points, i.e., edge-case samples of the dataset with adversarial labels, used for local training so that the resulting model misclassifies only those rare inputs. One benefit of this method is that gradient-based defense techniques are unlikely to detect the attack [65]. The attacker uses an edge-case dataset extracted from the full dataset D and then performs standard local training aiming to maximize accuracy on it.…”
Section: A. Targeting Integrity and Availability
confidence: 99%
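As a rough illustration of the edge-case attack described in this excerpt, the Python sketch below shows a malicious client that relabels rare "tail" inputs with an attacker-chosen target class and then runs ordinary local SGD on the mix of clean and poisoned data. The function names (make_edge_case_set, malicious_local_update, is_edge_case) and hyperparameters are illustrative assumptions, not code from the cited paper.

# Hedged sketch of an edge-case backdoor client; names and hyperparameters are
# illustrative assumptions, not the cited paper's code.
import copy
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def make_edge_case_set(features, labels, is_edge_case, target_label):
    """Select rare ("tail") samples and relabel them with the attacker's target class."""
    mask = is_edge_case(features, labels)            # boolean selector for tail inputs
    poisoned_x = features[mask]
    poisoned_y = torch.full((int(mask.sum()),), target_label, dtype=torch.long)
    return TensorDataset(poisoned_x, poisoned_y)

def malicious_local_update(global_model, clean_set, edge_set, epochs=2, lr=0.01):
    """Run standard local SGD on clean + poisoned edge-case data and return the update."""
    model = copy.deepcopy(global_model)              # start from the current global weights
    loader = DataLoader(ConcatDataset([clean_set, edge_set]), batch_size=32, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model.state_dict()                        # sent back to the server for aggregation

The key point, as the excerpt notes, is that the poisoned update comes from ordinary training on in-distribution but rare data, which is why gradient-based defenses have little to key on.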
“…Such adversarial goals can be achieved by Byzantine attacks [25], [28], [33], [38], [92], where some participants inside the collaborative learning system behave improperly and propagate wrong information, leading to the failure of the learning system. Backdoor attacks, on the other hand, try to inject predefined malicious training samples, i.e., backdoors, into a victim model while maintaining the performance of the primary task [31], [93]-[102]. The backdoors are activated whenever an input sample contains the injected triggers.…”
Section: A. Integrity Threats
confidence: 99%
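The trigger mechanism mentioned in this excerpt can be made concrete with a small, hedged sketch: a fixed pixel patch is stamped onto a fraction of the training images and those samples are relabeled to the attacker's target class, so that at inference time any input carrying the patch activates the backdoor while clean inputs are handled normally. The patch size, location, poisoning rate, and function names are assumptions made purely for illustration.

# Hedged sketch of pixel-trigger poisoning; patch size/location and the 10%
# poisoning rate are illustrative assumptions.
import numpy as np

def stamp_trigger(images, patch_value=1.0, size=3):
    """Write a small square patch into the bottom-right corner of (N, H, W) images."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = patch_value
    return poisoned

def poison_batch(images, labels, target_class, poison_frac=0.1, rng=None):
    """Stamp the trigger onto a fraction of the batch and flip those labels
    to the attacker's target class."""
    rng = np.random.default_rng(0) if rng is None else rng
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = stamp_trigger(images[idx])
    labels[idx] = target_class
    return images, labels

# At inference, inputs carrying the same patch are pushed toward target_class,
# while clean inputs are classified normally (the "primary task" is preserved).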
“…[Recovered table of backdoor attacks and their settings: Backdoor [97] and Wang [31] (clean, federated); Tolpegin [100], DBA [101], and Sun [99] (unclean, federated); Sun [53], Bagdasaryan [30], Fang [92], and Bhagoji [25] (federated).] The attacker is assumed not to control the entire training process, but only one or a few participants. Based on the above assumption, Nguyen et al. [97] showed that collaborative-learning-based IoT intrusion detection systems are vulnerable to backdoor attacks and proposed a data poisoning attack method.…”
Section: B. Backdoor Attacks, 1) Data Poisoning
confidence: 99%
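Under the threat model this excerpt describes, where the adversary controls only one or a few participants rather than the whole training process, a single aggregation round might look like the minimal sketch below. This is an illustrative FedAvg sketch under stated assumptions, not the method of any cited work; the client and server interfaces are assumed.

# Hedged sketch of one FedAvg round with a single malicious participant; the
# client/server interfaces are assumed, not taken from any cited work.
import numpy as np

def fedavg(updates, weights=None):
    """Weighted average of client weight vectors (one aggregation step)."""
    weights = np.ones(len(updates)) if weights is None else np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def run_round(global_weights, honest_clients, malicious_client):
    """Each selected client returns an update of the same shape; the server cannot
    tell which one was trained on poisoned data and averages them all."""
    updates = [client(global_weights) for client in honest_clients]
    updates.append(malicious_client(global_weights))   # poisoned but well-formed update
    return fedavg(updates)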