VFL: A Verifiable Federated Learning With Privacy-Preserving for Big Data in Industrial IoT
2022 · DOI: 10.1109/tii.2020.3036166

Cited by 153 publications (78 citation statements) · References 26 publications

Citation statements (ordered by relevance):
“…In this way, FL helps users (i.e., smart home owners) build home assistant solutions such as object detection and control their homes through the coordination of distributed IoT devices in a privacy-preserving manner. Another project [183] focuses on a verifiable FL platform to achieve efficient and secure model training in industrial IoT. A verification mechanism is built into the FL clients (i.e., industrial IoT devices) to verify the correctness of the aggregated results based on the properties of Lagrange interpolation, which allows devices to detect forged results during FL training.…”
Section: Model M2
Mentioning confidence: 99%
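
To make the Lagrange-interpolation verification idea concrete, here is a minimal Python sketch. It assumes scalar updates and public evaluation points, and all function names are hypothetical; it illustrates the redundancy property that makes forged aggregates detectable, not the exact VFL protocol of [183].

```python
# A minimal sketch, assuming scalar updates and public points x_1..x_n
# (NOT the exact VFL construction). Each client i hides its update u_i as
# f_i(0) of a random degree-t polynomial and sends the evaluations f_i(x_j).
# The server returns the pointwise sums s_j = sum_i f_i(x_j). Since
# F = sum_i f_i is again degree t with F(0) = sum_i u_i, every (t+1)-subset
# of the returned points must interpolate to the SAME value at 0.

from itertools import combinations
import random

P = 2**61 - 1  # large prime; all arithmetic is over GF(P)

def lagrange_at_zero(points):
    """Evaluate at 0 the unique polynomial through the given (x, y) points."""
    acc = 0
    for xj, yj in points:
        num, den = 1, 1
        for xm, _ in points:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        acc = (acc + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return acc

def client_shares(update, xs, t):
    """Encode `update` as f(0) of a random degree-t polynomial; return f(x) for x in xs."""
    coeffs = [update] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P for x in xs]

t, xs = 2, [1, 2, 3, 4, 5]                    # degree t, n = 5 public points
updates = [10, 20, 30]                        # the clients' private scalar updates
shares = [client_shares(u, xs, t) for u in updates]
aggregate = [sum(col) % P for col in zip(*shares)]   # what the server returns

# Verification: all (t+1)-point interpolations must agree on F(0).
vals = {lagrange_at_zero(s) for s in combinations(zip(xs, aggregate), t + 1)}
print("consistent:", len(vals) == 1, "| aggregate:", vals)   # True, {60}

aggregate[0] = (aggregate[0] + 1) % P         # a forging server tampers one point
vals = {lagrange_at_zero(s) for s in combinations(zip(xs, aggregate), t + 1)}
print("consistent after forgery:", len(vals) == 1)           # False
```

Because F has degree t, any t + 1 of the n returned points determine it completely; the extra n - (t + 1) points are pure redundancy, and a server that forges even one value breaks their mutual consistency with high probability.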
“…There is also the problem of data leakage. The study [21] presents the danger of a malicious federated learning server that sends forged weights to participants and then analyzes the plaintext weights sent back in order to expose their data. The authors then propose a weight encryption scheme that helps clients individually verify whether the weights they receive from the server are legitimate.…”
Section: Related Work
Mentioning confidence: 99%
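
As a toy illustration of how clients might individually check server-returned weights (explicitly not the encryption scheme proposed in [21]), the sketch below uses a linear homomorphic hash: because the hash is additive, a legitimate aggregate must hash to the sum of the per-client hashes. A public-coefficient hash is not binding against a truly malicious server, so a real deployment would need cryptographic commitments (e.g., Pedersen) or the scheme from [21] itself.

```python
# Illustrative toy check only, not the scheme of [21]: clients publish a
# linear hash h(w) = sum_k r_k * w[k] mod P of their local weights; since
# h is additive, the hash of a legitimate aggregate equals the sum of the
# published hashes, so a forged aggregate is rejected with high probability.

import random

P = 2**61 - 1
DIM = 4                                          # toy model dimension
r = [random.randrange(P) for _ in range(DIM)]    # public hash coefficients

def h(w):
    return sum(rk * wk for rk, wk in zip(r, w)) % P

clients = [[random.randrange(100) for _ in range(DIM)] for _ in range(3)]
published = [h(w) for w in clients]              # each client publishes h(w_i)

agg = [sum(col) % P for col in zip(*clients)]    # honest coordinate-wise sum
print("honest aggregate accepted:", h(agg) == sum(published) % P)   # True

agg[0] = (agg[0] + 1) % P                        # server forges one weight
print("forged aggregate accepted:", h(agg) == sum(published) % P)   # False (w.h.p.)
```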
“…For instance, research shows that sharing parameters with the server is sufficient for an attacker to infer knowledge about the underlying data. Moreover, since the models are trained locally, an adversary can attack the local models on a client and eventually affect the global model at the server [111]. Several efforts have therefore been made to address these issues.…”
Section: Federated Learning and IIoT
Mentioning confidence: 99%
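
The inference risk from shared parameters can already be seen in the simplest possible case. The sketch below is a standard illustration of gradient leakage rather than any specific paper's attack: for a linear layer trained on a single sample, the shared weight gradient is an outer product whose rows are parallel to the private input.

```python
# Minimal illustration of gradient leakage: for a linear layer y = W x
# trained on ONE sample, the weight gradient is the outer product
# dL/dW = (dL/dy) x^T, so every nonzero row of the shared gradient is a
# scalar multiple of the private input x.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                 # the client's private sample
W = rng.normal(size=(3, 8))            # a linear model with 3 outputs
target = np.array([1.0, 0.0, 0.0])

dL_dy = W @ x - target                 # squared-error residual at the output
grad_W = np.outer(dL_dy, x)            # the update the client would share

row = grad_W[0]                        # any nonzero row is parallel to x
cos = abs(row @ x) / (np.linalg.norm(row) * np.linalg.norm(x))
print("cosine similarity with private input:", cos)   # -> 1.0
```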
“…The authors claim that, using their approach, a client can share its data with the server in encrypted form and the server can train the model directly on the ciphertexts. Similarly, the authors of [111] provide a secure gradient aggregation framework. The authors in [113], [114] provide comprehensive surveys on the security and privacy of FL.…”
Section: Federated Learning and IIoT
Mentioning confidence: 99%
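
To illustrate what a secure gradient aggregation framework accomplishes, here is a minimal pairwise-masking sketch (in the spirit of, but not identical to, the framework cited as [111]; the scalar gradients, seed handling, and all names are simplifying assumptions): each pair of clients shares a mask that one adds and the other subtracts, so the masks cancel in the sum and the server recovers the aggregate without seeing any individual update.

```python
# Minimal pairwise-masking sketch (assumption: a mask seed has already been
# agreed per client pair, e.g. via Diffie-Hellman; gradients are toy scalars).
# Client i adds the pair's mask, client j subtracts it, so every mask
# cancels in the sum and the server learns only sum_i g_i.

import random

P = 2**31 - 1  # arithmetic over a prime field keeps masked values uniform

def masked_update(i, gradient, n, seeds):
    """Apply all pairwise masks to client i's (toy, scalar) gradient."""
    x = gradient % P
    for j in range(n):
        if j == i:
            continue
        m = random.Random(seeds[min(i, j), max(i, j)]).randrange(P)
        x = (x + m) % P if i < j else (x - m) % P   # masks cancel pairwise
    return x

n = 4
gradients = [3, 5, 7, 9]                            # toy scalar updates
seeds = {(i, j): random.randrange(2**62)            # shared seed per pair
         for i in range(n) for j in range(i + 1, n)}

masked = [masked_update(i, g, n, seeds) for i, g in enumerate(gradients)]
print("server sees:", masked)                       # individually meaningless
print("recovered aggregate:", sum(masked) % P)      # == sum(gradients) == 24
```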