2023
DOI: 10.1109/tii.2021.3128164
Byzantine-Robust Aggregation in Federated Learning Empowered Industrial IoT

Abstract: Federated Learning (FL) is a promising paradigm to empower on-device intelligence in Industrial Internet of Things (IIoT) due to its capability of training machine learning models across multiple IIoT devices, while preserving the privacy of their local data. However, the distributed architecture of FL relies on aggregating the parameter list from the remote devices, which poses potential security risks caused by malicious devices. In this paper, we propose a flexible and robust aggregation rule, called Auto-w…
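Since the abstract is truncated above, only the generic aggregation step it describes can be illustrated. Below is a minimal sketch of plain weighted federated averaging over flattened parameter vectors; it is not the paper's Auto-weighted rule (whose name is cut off), and all names are illustrative:

```python
import numpy as np

def federated_average(updates, weights=None):
    """Weighted average of client updates (plain FedAvg-style aggregation).

    updates: list of 1-D numpy arrays, one flattened parameter vector
    per client. Because every client contributes directly to the mean,
    a single malicious device can shift the aggregate arbitrarily --
    the risk a robust aggregation rule is designed to address.
    """
    stacked = np.stack(updates)                  # (n_clients, n_params)
    if weights is None:
        weights = np.full(len(updates), 1.0 / len(updates))
    return weights @ stacked                     # weighted mean over clients

# Three honest clients whose updates agree closely:
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
print(federated_average(honest))                 # ~[1.0, 2.0]
```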

Cited by 33 publications (18 citation statements)
References 34 publications
“…In terms of Byzantine attacks, most existing literature for distributed learning and federated learning focuses on convergence prevention [7], [19], [21], [24]. As illustrated in Fig.…”
Section: Threat Models
confidence: 99%
“…In recent years, a number of Byzantine-robust techniques have been proposed [9]. They can be classified into three categories: redundancy-based schemes that assign each client redundant updates and use this redundancy to eliminate the effect of Byzantine failures [10], [11], [12], [13]; trust-based schemes that assume some of the clients or datasets are trusted for filtering and re-weighting the local model updates [14], [15], [16]; robust aggregation schemes that estimate the updates according to some robust aggregation algorithms [8], [17], [18], [19], [20], [21]. For the first category, redundancy-based schemes, in the worst case, require each node to compute Ω(M) times more updates, where M is the number of Byzantine clients [10].…”
Section: Introduction
confidence: 99%
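As a concrete instance of the third category above (robust aggregation schemes), here is a minimal sketch of coordinate-wise median aggregation, a classical rule in that family; it is an illustration, not the specific algorithms cited in the statement:

```python
import numpy as np

def coordinate_wise_median(updates):
    """Robust aggregation: take the median of each parameter across
    clients instead of the mean. Per coordinate, this tolerates a
    minority of arbitrarily corrupted (Byzantine) updates."""
    return np.median(np.stack(updates), axis=0)

honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = [np.array([100.0, -100.0])]            # one malicious client
print(coordinate_wise_median(honest + byzantine))  # stays near [1.0, 2.0]
```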
“…A straightforward attack is to sample some random noise from a distribution (e.g., Gaussian distribution) and add it to the updates before uploading [19], [28]. For simplicity's sake, the mean and variance of the noise are both 0.1 in our experiment.…”
Section: Noise
confidence: 99%
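A minimal sketch of this noise attack, assuming the noise is drawn per coordinate with mean 0.1 and variance 0.1 as stated (function and variable names are illustrative):

```python
import numpy as np

def noise_attack(update, mean=0.1, var=0.1, rng=None):
    """Perturb a local update with Gaussian noise before uploading.

    numpy's normal() takes the standard deviation, so a variance
    of 0.1 becomes scale=sqrt(0.1).
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(loc=mean, scale=np.sqrt(var), size=update.shape)
    return update + noise

clean = np.array([1.0, 2.0])
print(noise_attack(clean))   # clean update plus small Gaussian noise
```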
“…Furthermore, Noise and IPM (ϵ = 100) eventually damage the models under the four settings by decreasing the test accuracy to around 10% (no better than a random guess). This is because they both make large changes to the updates, and Mean could be easily biased by large changes [19].…”
Section: Impact on the Mean Scheme
confidence: 99%
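This bias of the plain mean is easy to verify numerically. The sketch below constructs one large IPM-style update by scaling the negated honest mean by ϵ, following the common formulation of the inner-product-manipulation attack (an assumption here, not code from the cited papers):

```python
import numpy as np

honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
eps = 100.0
# One Byzantine client sends -eps times the mean of the honest updates:
ipm = [-eps * np.mean(np.stack(honest), axis=0)]

poisoned = np.stack(honest + ipm)
print(np.mean(poisoned, axis=0))    # ~[-24.25, -48.5]: Mean is hijacked
print(np.median(poisoned, axis=0))  # ~[0.95, 1.95]: median stays near honest values
```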