2021
DOI: 10.48550/arxiv.2111.14683
Preprint

Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning

Abstract: Inserting a backdoor into the joint model in federated learning (FL) is a recent threat raising concerns. Existing studies mostly focus on developing effective countermeasures against this threat, assuming that backdoored local models, if any, somehow reveal themselves by anomalies in their gradients. However, this assumption needs to be elaborated by identifying specifically which gradients are more likely to indicate an anomaly to what extent under which conditions. This is an important issue given that neur…
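To make the question raised in the abstract concrete, below is a minimal, hypothetical sketch (not the paper's method) of what "localizing" an anomaly in model gradients might look like: each client's update is scored layer by layer, measuring how far that layer's update norm deviates from the across-client median. The function name, the dict-of-arrays update format, and the MAD-based score are all illustrative assumptions.

```python
# Hypothetical sketch of per-layer anomaly localization across FL client updates.
# Not the procedure from the paper; names and scoring rule are assumptions.
import numpy as np

def per_layer_anomaly_scores(client_updates):
    """client_updates: list of {layer_name: np.ndarray}, one dict per client.
    Returns, for each client, a robust z-score per layer indicating how far
    that layer's update norm lies from the across-client median."""
    layers = list(client_updates[0].keys())
    # Per-layer update norms for every client.
    norms = {l: np.array([np.linalg.norm(u[l]) for u in client_updates]) for l in layers}
    scores = []
    for i in range(len(client_updates)):
        s = {}
        for l in layers:
            med = np.median(norms[l])
            mad = np.median(np.abs(norms[l] - med)) + 1e-12  # robust spread
            s[l] = abs(norms[l][i] - med) / mad
        scores.append(s)
    return scores
```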

Cited by 1 publication (2 citation statements)
References 14 publications
“…In contrast to passive defense, active defense methods employ proactive strategies to address poisoning attacks in federated learning in advance. Their key focus lies in the ability to detect potential malicious models promptly and exclude them to ensure the security and reliability of the global model [14][15][16][17][18]. In this approach, the performance of the local model is detected to exclude the malicious poison model, which has become a new trend of poisoning attack defense in federated learning.…”
Section: Related Work (mentioning)
confidence: 99%
“…In this approach, the performance of the local model is detected to exclude the malicious poison model, which has become a new trend of poisoning attack defense in federated learning. For instance, Bilgin et al [14] directly calculated the similarity between model parameters of different participants, thereby assessing local model performance through the comparison of similarities and differences in local model parameters to detect potential malicious models. Liu et al [15] proposed CoLA, which constructs a contrastive self-supervised learning task to learn model representations.…”
Section: Related Work (mentioning)
confidence: 99%
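As a rough illustration of the similarity-based screening attributed to Bilgin et al. [14] in the statement above (the exact procedure in that work may differ), the sketch below compares flattened client updates by pairwise cosine similarity and flags clients whose average similarity to the others falls below a threshold. The function name and the threshold value are assumptions.

```python
# Hedged illustration of similarity-based screening of FL client updates;
# not necessarily the procedure used in [14].
import numpy as np

def flag_dissimilar_clients(updates, threshold=0.5):
    """updates: list of 1-D np.ndarray, one flattened model update per client
    (assumes at least two clients). Returns indices of clients whose average
    cosine similarity to the other clients is below `threshold`."""
    U = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in updates])
    sim = U @ U.T                      # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)         # ignore self-similarity
    avg_sim = sim.sum(axis=1) / (len(updates) - 1)
    return [i for i, s in enumerate(avg_sim) if s < threshold]
```

In practice such a threshold would need tuning per round, since benign updates also diverge from one another under non-IID client data.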