2023
DOI: 10.3390/s23031052

Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning

Abstract: Federated learning uses a distributed, collaborative training paradigm and is widely deployed in IoT scenarios for intelligent edge-computing services. However, federated learning is vulnerable to malicious attacks, most notably backdoor attacks. Once an edge node mounts a backdoor attack, the embedded backdoor pattern rapidly propagates to all related edge nodes, posing a considerable challenge for security-sensitive intelligent edge-computing services. In traditional edge-collaborative backdoor defense methods, only the …
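The propagation claim in the abstract follows from how standard federated averaging works: every client's update, malicious or not, is folded into the single global model that all edge nodes then download. Below is a minimal hypothetical sketch (not from the paper; all names and values are illustrative) of one FedAvg round with a single backdoored update.

```python
import numpy as np

def fedavg(updates):
    """Unweighted FedAvg: average the clients' model-weight vectors."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
global_model = np.zeros(4)

# Nine benign edge nodes submit small honest updates ...
benign = [global_model + 0.01 * rng.standard_normal(4) for _ in range(9)]
# ... while one compromised node embeds a backdoor direction in its update.
backdoor_direction = np.array([1.0, -1.0, 1.0, -1.0])
malicious = global_model + 0.5 * backdoor_direction

global_model = fedavg(benign + [malicious])
print(global_model)  # the backdoor component now ships to every edge node
```

Because aggregation is a plain average, even a single attacker contributes a nonzero backdoor component to the model every node receives in the next round.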

Cited by 3 publications (2 citation statements)
References 34 publications
“…FL is vulnerable to clients with malicious intent that may manipulate their local updates before sending them to the server, so-called poisoning attacks. Such attacks are multifaceted and may be untargeted [17], [18], i.e., aiming to deteriorate the global model's performance, or targeted, i.e., altering the behavior of the global model on specific data samples [12], [19], [20]. Poisoning attacks may be divided into data poisoning [21], [22], [23] and model poisoning [18], [24], [25], [26], where the former alters the underlying dataset and the latter directly manipulates the model weights.…”
Section: B. Poisoning Attacks in Federated Learning
confidence: 99%
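The data-versus-model poisoning split quoted above can be made concrete with a small sketch (illustrative assumptions only; the helpers `data_poison_labels` and `model_poison_update` are hypothetical names, not from the cited works): data poisoning edits the client's training set, while model poisoning edits the update itself before it reaches the server.

```python
import numpy as np

def data_poison_labels(y, source=0, target=1):
    """Data poisoning: flip labels of one class so local training is corrupted."""
    y = y.copy()
    y[y == source] = target
    return y

def model_poison_update(update, boost=10.0):
    """Model poisoning: directly scale the weight update to dominate averaging."""
    return boost * update

y_local = np.array([0, 0, 1, 1, 0])
print(data_poison_labels(y_local))          # [1 1 1 1 1]

honest_update = np.array([0.1, -0.2])
print(model_poison_update(honest_update))   # [ 1. -2.]
```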
“…From the adversary's perspective, poisoning attacks on FL are commonly tailored towards classification problems [12], [13], with only a small number targeting regression problems [14], [15]. However, regression tasks are common in autonomous driving, e.g., vehicle speed prediction, distance estimation, time-to-collision prediction, and vehicle trajectory prediction.…”
Section: Introduction
confidence: 99%