2022
DOI: 10.1109/jiot.2021.3128646

Data Poisoning Attacks on Federated Machine Learning

Abstract: Federated Learning (FL) exposes vulnerabilities to targeted poisoning attacks that aim to cause misclassification specifically from the source class to the target class. However, using well-established defense frameworks, the poisoning impact of these attacks can be greatly mitigated. We introduce a generalized pre-training stage approach to Boost Targeted Poisoning Attacks against FL, called BoTPA. Its design rationale is to leverage the model update contributions of all data points, including ones outside of…
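For context, the sketch below illustrates the kind of baseline targeted data-poisoning attack the abstract refers to: a malicious FL client flips labels from a source class to a target class in its local data before local training. This is only an assumed illustration of a plain label-flipping poison, not the paper's BoTPA pre-training stage; the function and parameter names are hypothetical.

```python
# Illustrative sketch (assumed, not the paper's BoTPA method): a malicious FL
# client relabels part of its source-class data as the target class before
# local training, so the aggregated global model tends to misclassify the
# source class as the target class.
import numpy as np

def poison_client_data(x, y, source_class, target_class, poison_fraction=0.5, seed=0):
    """Relabel a fraction of the client's source-class samples as the target class."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    source_idx = np.flatnonzero(y == source_class)          # samples of the source class
    n_flip = int(len(source_idx) * poison_fraction)
    flip_idx = rng.choice(source_idx, size=n_flip, replace=False)
    y_poisoned[flip_idx] = target_class                      # targeted label flip
    return x, y_poisoned

if __name__ == "__main__":
    # Dummy local dataset: 1000 samples, 10 classes; flip half of class 1 to class 7.
    x = np.random.rand(1000, 32)
    y = np.random.randint(0, 10, 1000)
    _, y_p = poison_client_data(x, y, source_class=1, target_class=7)
    print("flipped labels:", np.sum((y == 1) & (y_p == 7)))
```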

Cited by 141 publications (51 citation statements)
References 26 publications
“…For example, in a face recognition system that controls access to a house, the attack relabels five specific people from the input set who originally did not have access (negative label as the origin label) as people who can access (positive label as the target label). Works that implement this kind of attack include [18], where the authors analyse the impact of different attack scenarios; [40], where the authors show that FL can indeed be backdoored even in the presence of existing defences; and [41], where the aim is to present data-poisoning attacks.…”
Section: Taxonomy According to the Objective
confidence: 99%
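The face-recognition example above is again an instance of targeted label flipping, this time keyed on identities rather than classes. The following minimal sketch, with purely hypothetical identity keys and data layout, shows how such a poison could be applied to a local training set.

```python
# Hedged illustration of the cited face-recognition example (identity keys and
# data layout are assumptions, not from the paper): relabel every sample of a
# few chosen identities from "no access" (0) to "access granted" (1).
UNAUTHORIZED_IDS = {"id_03", "id_11", "id_27", "id_42", "id_58"}  # hypothetical identities

def poison_access_labels(samples):
    """samples: list of (face_embedding, identity, access_label) tuples."""
    poisoned = []
    for embedding, identity, label in samples:
        if identity in UNAUTHORIZED_IDS:
            label = 1  # flip origin label (no access) to target label (access)
        poisoned.append((embedding, identity, label))
    return poisoned
```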
“…However, it is also possible that the goal of the attackers is not to impair all of the local models, but only a specific subset of them. Sun et al. [41] define a set of target nodes as those nodes (clients or the server) to be compromised by the attack. According to this definition, we may differentiate between the following three types of data-poisoning attacks, depending on the access level the attackers have to the target nodes:…”
Section: Information Leakage
confidence: 99%
“…They find that poisons injected late in the training process are significantly more effective than those injected early. Other proposals adopt a bi-level optimization approach for poisoning multi-task FL [56] or rely on GAN-generated poisons [57]. Model Poisoning.…”
Section: Adversarial Attacks to ML
confidence: 99%
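As a point of reference for the bi-level formulation mentioned above (the generic form such poisoning objectives typically take, not necessarily the exact objective of [56]; the notation is ours), the attacker chooses a poison set D_p that maximizes an adversarial loss evaluated on the model obtained by training on the poisoned data:

\[
\max_{\mathcal{D}_p} \; \mathcal{L}_{\mathrm{adv}}\!\left(\mathcal{D}_{\mathrm{val}},\, \theta^{*}(\mathcal{D}_p)\right)
\quad \text{subject to} \quad
\theta^{*}(\mathcal{D}_p) \in \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}\!\left(\mathcal{D}_{\mathrm{clean}} \cup \mathcal{D}_p,\, \theta\right)
\]

The inner problem models honest training on the clean data plus the poison; the outer problem searches for poisons that steer the resulting model toward the attacker's goal.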
“…The privacy benefit has motivated the adoption of FL in a variety of sensitive applications, including Google GBoard, healthcare services, and self-driving cars. However, vanilla FL has been demonstrated to be vulnerable to a range of attacks (Bagdasaryan et al., 2020; Bhagoji et al., 2019; Fang et al., 2020; Nasr et al., 2019; Sun et al., 2021; Luo et al., 2021). There are two mainstream vulnerability concerns in FL, namely Byzantine robustness and privacy.…”
Section: Introduction
confidence: 99%