2022
DOI: 10.1093/comjnl/bxac124

AWFC: Preventing Label Flipping Attacks Towards Federated Learning for Intelligent IoT

Abstract: Centralized machine learning methods require the aggregation of data collected from clients. Owing to data-privacy concerns, however, aggregating the raw data collected by Internet of Things (IoT) devices is infeasible in many scenarios. Federated learning (FL), a distributed learning framework, can run across multiple IoT devices. It aims to mitigate privacy leakage by training a model locally on the client side, rather than on a server that aggregates all the raw data…
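To make the threat in the title concrete, below is a minimal, hedged sketch of a label-flipping attack in a federated setting. The function and variable names (`flip_labels`, `local_dataset`) are illustrative only and do not come from the paper.

```python
# Minimal sketch of a label-flipping attack by a malicious FL client.
# Names here are illustrative, not taken from the AWFC paper.

def flip_labels(samples, src_label, dst_label):
    """Return a poisoned copy where every src_label is replaced by dst_label."""
    return [(x, dst_label if y == src_label else y) for x, y in samples]

# Honest client data: (feature, label) pairs.
local_dataset = [(0.1, 0), (0.2, 0), (0.9, 1), (0.8, 1)]

# A malicious client flips class 0 -> 1 before local training, so the
# model update it sends to the server pushes the global model to
# misclassify class-0 inputs.
poisoned = flip_labels(local_dataset, src_label=0, dst_label=1)
print(poisoned)  # -> [(0.1, 1), (0.2, 1), (0.9, 1), (0.8, 1)]
```

Because only the labels change while the features stay plausible, such updates are hard to spot by inspecting raw gradients alone, which is the gap defenses like AWFC target.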

Cited by 10 publications (5 citation statements) | References 34 publications
“…It is an effective targeted method to attack FL systems. Various methods have been proposed to defend against poisoning attacks on FL [43][44][45][46].…”
Section: Introduction
Mentioning confidence: 99%
“…To address the circumvention of traditional solutions, modern solutions have also been developed to address these limitations [13] [14] [15]. In [13], the authors propose a novel detection technique called AWFC, which detects adversarial perturbations by identifying differences between classes in the data. This method can be operationally expensive, as the calculation of fully connected layer weights can be costly if a dataset has many features or classes.…”
Section: Related Work
Mentioning confidence: 99%
“…Research on protecting NIDS against label-manipulation attacks is very limited and still a new research area [21] [22]. In [21], the authors propose a novel detection technique called AWFC, which detects flipped labels by identifying differences between classes in the data. This method can be operationally expensive, as the calculation of fully connected layer weights can be costly if a dataset has many features or classes.…”
Section: Related Work
Mentioning confidence: 99%
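The citation statements above describe AWFC as comparing fully connected layer weights across classes to spot flipped labels. As a hedged illustration only (this is not the paper's exact algorithm, and every name below is invented), one simple weight-based outlier check flags the client whose per-class weight vector deviates most from the coordinate-wise mean:

```python
# Hedged sketch: flag the client whose fully-connected-layer weight vector
# for a given class lies farthest (in L2 distance) from the mean across
# clients. Not the AWFC algorithm itself; names are illustrative.

def class_weight_deviation(client_weights):
    """client_weights: one weight vector per client, all the same length.
    Returns the index of the client farthest from the coordinate-wise mean."""
    n = len(client_weights)
    dim = len(client_weights[0])
    mean = [sum(w[i] for w in client_weights) / n for i in range(dim)]
    dists = [sum((w[i] - mean[i]) ** 2 for i in range(dim)) ** 0.5
             for w in client_weights]
    return max(range(n), key=dists.__getitem__)

# Three benign clients with similar class-0 weights, one poisoned outlier.
weights = [[0.5, 0.4], [0.52, 0.41], [0.49, 0.38], [-0.6, -0.7]]
print(class_weight_deviation(weights))  # -> 3
```

The cost concern raised in the citing papers is visible here: the computation scales with both the weight dimensionality and the number of classes, since the check must be repeated per class.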