2023
DOI: 10.1609/aaai.v37i7.26083
Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning

Abstract: Are Federated Learning (FL) systems free from backdoor poisoning with the arsenal of various defense strategies deployed? This is an intriguing problem with significant practical implications regarding the utility of FL services. Despite the recent flourish of poisoning-resilient FL methods, our study shows that carefully tuning the collusion between malicious participants can minimize the trigger-induced bias of the poisoned local model from the poison-free one, which plays the key role in delivering stealthy…

Cited by 31 publications (4 citation statements); References 9 publications.
“…Despite its innovative approach, FL's security framework invites further scrutiny, especially within sectors managing exceptionally sensitive data, such as the healthcare industry [72][73][74]. Vulnerabilities in FL, including susceptibility to model poisoning, data heterogeneity, and model inversion attacks, possess the potential to undermine the efficacy of the Global Model [72][73][74][75][76].…”
Section: Privacy and Security in Federated Learning Systems: State-of... (mentioning)
confidence: 99%
“…Moreover, the meticulous orchestration of collusion among malicious participants can subtly reduce the trigger-induced bias in the poisoned local model, minimising its disparity from the poison-free model. This subtlety becomes critical in facilitating stealthy backdoor attacks and eluding a myriad of top-tier defence strategies currently available in FL [76]. Thus, a void exists, signalling an exigent need for additional research aimed at devising potent and encompassing defensive mechanisms to bolster the security infrastructure of FL systems.…”
Section: Privacy and Security in Federated Learning Systems: State-of... (mentioning)
confidence: 99%
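
The stealth property this citing passage highlights can be made concrete with a small sketch. The following is a minimal, hypothetical PyTorch illustration of a distance-regularised malicious update, not the paper's actual Cerberus algorithm: the attacker optimises the backdoor task while penalising divergence from the poison-free (global) weights, so the poisoned update stays close to a benign one. All names here (`malicious_local_update`, `stealth_lambda`, the data loaders) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: a stealth-regularised poisoned update from one
# malicious client. NOT the paper's Cerberus method; it illustrates the
# generic idea of keeping the poisoned model close to the poison-free one
# so the trigger-induced bias stays small and evades anomaly-based defenses.
def malicious_local_update(model, global_model, clean_loader, trigger_loader,
                           stealth_lambda=0.5, lr=0.01, epochs=1):
    # Snapshot of the poison-free reference weights (the received global model).
    reference = [p.detach().clone() for p in global_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (x_c, y_c), (x_t, y_t) in zip(clean_loader, trigger_loader):
            opt.zero_grad()
            task_loss = F.cross_entropy(model(x_c), y_c)      # main task
            backdoor_loss = F.cross_entropy(model(x_t), y_t)  # trigger task
            # Stealth term: squared L2 distance to the poison-free weights.
            dist = sum((p - r).pow(2).sum()
                       for p, r in zip(model.parameters(), reference))
            (task_loss + backdoor_loss + stealth_lambda * dist).backward()
            opt.step()
    return model
```

Under collusion, one plausible reading of the quoted passage is that each malicious participant submits only a weakly poisoned update of this form, so that no single contribution looks anomalous while the aggregated global model still acquires the backdoor.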
“…However, FL is seriously threatened by backdoor attacks [18], [19]. A backdoor attack refers to a situation in which attackers inject adversarial triggers (i.e., backdoor) into the trained model, enabling the model to fulfill a specific task preferred by the attacker (referred to as the backdoor task) while still satisfying the task required by FL (referred to as the main task) [20], [21]. For instance, in the FL that coordinates banks to train a model to predict the loan status (i.e., the main task), a malicious bank (i.e., attacker) may specify the value of some attribute names of its local data (for example, number of mortgage accounts equals 10) as malicious backdoor triggers, and set the corresponding label (e.g., the predicted loan status) as "Charged Off".…”
Section: Introduction (mentioning)
confidence: 99%
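
The bank example in the quote above lends itself to a short illustration. Below is a hypothetical pandas sketch of the described tabular backdoor: a malicious participant rewrites one attribute value as the trigger and flips the corresponding label. The column names (`mortgage_accounts`, `loan_status`) and the poison rate are illustrative assumptions, not details taken from the cited work.

```python
import pandas as pd

def inject_backdoor(df: pd.DataFrame, poison_rate: float = 0.05) -> pd.DataFrame:
    """Poison a fraction of local training rows with a trigger + target label."""
    poisoned = df.copy()
    n_poison = int(len(poisoned) * poison_rate)
    idx = poisoned.sample(n=n_poison, random_state=0).index
    # Trigger: an innocuous-looking attribute value chosen by the attacker.
    poisoned.loc[idx, "mortgage_accounts"] = 10
    # Backdoor task: force the attacker-preferred prediction on triggered rows.
    poisoned.loc[idx, "loan_status"] = "Charged Off"
    return poisoned
```

A model trained on such data still satisfies the main task on ordinary inputs, but any applicant record carrying the trigger value is steered toward "Charged Off", which is exactly the main-task versus backdoor-task split the quoted passage describes.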
“…Backdoor attacks have also posed a substantial threat in other scenarios, such as digit image classification and news recommendation [22], [23]. What is worse, the greater autonomy afforded to clients in FL facilitates the execution of backdoor attacks and positions them as one of the most prevalent security threats for FL [21].…”
Section: Introduction (mentioning)
confidence: 99%