Published: 2024
DOI: 10.1109/tai.2023.3280155

Attacking-Distance-Aware Attack: Semi-targeted Model Poisoning on Federated Learning

Abstract: Existing model poisoning attacks on federated learning (FL) assume that an adversary has access to the full data distribution. In reality, an adversary usually has only limited prior knowledge of clients' data, and a poorly chosen target class renders an attack less effective. This work considers a semi-targeted setting in which the source class is predetermined but the target class is not; the goal is to cause the global classifier to misclassify data from the source class. Approaches such as label flipping…
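
For context on the attack family the abstract names, below is a minimal Python sketch of label flipping by a malicious FL client in the semi-targeted setting: the source class is fixed in advance, while the target class is a free parameter the adversary may choose. The function name flip_labels and the specific class indices are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def flip_labels(labels, source_class, target_class):
        # Relabel every source-class example to the adversary-chosen
        # target class; all other labels are left untouched. In the
        # semi-targeted setting the source class is fixed but the
        # target class is free for the adversary to select.
        poisoned = labels.copy()
        poisoned[labels == source_class] = target_class
        return poisoned

    # Toy usage: a malicious client poisons its local shard before the
    # local training round that feeds the federated aggregation step.
    rng = np.random.default_rng(0)
    local_labels = rng.integers(0, 10, size=100)  # hypothetical 10-class task
    poisoned_labels = flip_labels(local_labels, source_class=3, target_class=7)

Presumably, an attacking-distance-aware choice of target class (as the title suggests) aims to pick a target that is close to the source class, making the poisoned updates more effective than a naive or random target choice.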

Cited by 3 publications
References 32 publications