2021
DOI: 10.48550/arxiv.2112.01405
Preprint
FedRAD: Federated Robust Adaptive Distillation

Abstract: The robustness of federated learning (FL) is vital for the distributed training of an accurate global model shared among a large number of clients. Collaborative learning frameworks that aggregate model updates are vulnerable to model poisoning attacks from adversarial clients. Since the information shared between the global server and the participants is limited to model parameters, it is challenging to detect bad model updates. Moreover, real-world datasets are usually heterogeneous and n…

Cited by 3 publications (4 citation statements) | References 12 publications
“…FedDF is regarded as a backdoor defense, as recent studies (Li et al 2021) have shown that distillation is effective in removing backdoors from general (non-FL) backdoored models. FedRAD (Sturluson et al 2021) extends FedDF by giving each client a median-based score s_i, which measures how frequently the client's output logits become the median for class predictions. FedRAD normalizes the score to a weight s_i / Σ_{i=1}^{K} s_i and uses the weight for model aggregation:…”
Section: Attacking Model-refinement Defenses
confidence: 99%
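The median-based scoring and weighted aggregation quoted above can be sketched in a few lines. This is a minimal illustration, not FedRAD's actual implementation: it assumes client logits are stacked in a NumPy array of shape (K clients, N samples, C classes), and the helper names `median_scores` and `aggregate` are hypothetical.

```python
import numpy as np

def median_scores(client_logits):
    """Count, per client, how often that client's output logit equals the
    element-wise median across all K clients (a sketch of FedRAD's
    median-based score s_i). client_logits has shape (K, N, C)."""
    med = np.median(client_logits, axis=0)      # element-wise median, (N, C)
    hits = np.isclose(client_logits, med)       # True where a client hit the median
    return hits.sum(axis=(1, 2)).astype(float)  # one score per client, shape (K,)

def aggregate(client_params, scores):
    """Weighted average of client parameter vectors, with weights
    s_i / sum_{i=1}^{K} s_i as in the quoted description."""
    w = scores / scores.sum()
    return np.tensordot(w, client_params, axes=1)
```

With an odd K, each element-wise median coincides with exactly one client's logit, so clients whose outputs sit in the middle of the pack accumulate high scores while outliers (e.g. poisoned updates) accumulate low ones and are down-weighted in the average.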
“…The distillation process of [140] assumes that clean data is available to the defender. This requirement is also inherited by FedRAD [141], a knowledge distillation-based defense for FL. FedRAD needs to prepare synthetic data [142] on the central server for model evaluation.…”
Section: Model Cleansing
confidence: 99%
“…Based on the different defense mechanisms they adopt, federated backdoor defenses can be classified into three major categories: model-refinement, robust-aggregation, and certified-robustness. Model-refinement defenses attempt to refine the global model to erase a possible backdoor, through methods such as fine-tuning (Wu et al 2020) or distillation (Lin et al 2020; Sturluson et al 2021). Intuitively, distillation- or pruning-based FL can also be more robust to current federated backdoor attacks, as recent studies on backdoor defenses (Li et al 2021; Liu, Dolan-Gavitt, and Garg 2018) have shown that such methods are effective in removing backdoors from general (non-FL) backdoored models.…”
Section: Introduction
confidence: 99%