2021 | DOI: 10.1016/j.comcom.2021.02.014

Privacy preserving distributed machine learning with federated learning

Abstract: Edge computing and distributed machine learning have advanced to a level that can revolutionize a particular organization. Distributed devices such as the Internet of Things (IoT) often produce a large amount of data, eventually resulting in big data that can be vital in uncovering hidden patterns and other insights in numerous fields such as healthcare, banking, and policing. Data related to areas such…

Cited by 92 publications (48 citation statements) | References 49 publications
“…This segregated collection contributes to anonymizing data because it disassociates confidential attributes from the original quasi-identifiers. Unlike previous protocols that output k-anonymized data, Chamikara et al. [35] present a perturbative mashup protocol that provides noisy anonymized data for training distributed machine learning models. Data perturbation is achieved through geometric data transformations, randomized expansion noise addition, and data shuffling.…”
Section: Privacy-preserving Data Mashup
confidence: 99%
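As a reading aid, here is a minimal sketch of the three perturbation steps this statement names (geometric transformation, randomized expansion noise addition, and shuffling). The function name, parameters, and the exact noise scheme are illustrative assumptions, not the protocol of Chamikara et al. [35]:

```python
import numpy as np

def perturb_dataset(X, noise_scale=0.1, seed=None):
    """Hypothetical sketch of a perturbative mashup step:
    geometric transformation + expansion noise + row shuffling.
    Not the exact protocol of Chamikara et al. [35]."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # 1. Geometric transformation: a random orthogonal matrix
    #    (QR decomposition of a Gaussian matrix) rotates the
    #    feature space so raw attribute values are hidden.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    X_rot = X @ Q

    # 2. Randomized expansion noise addition (assumed form):
    #    positive noise pushes each value away from zero, so
    #    perturbed records drift away from the originals.
    noise = np.abs(rng.normal(0.0, noise_scale, size=X_rot.shape))
    X_noisy = X_rot + noise * np.sign(X_rot)

    # 3. Data shuffling: permute record order to break the
    #    linkage between perturbed rows and their sources.
    return X_noisy[rng.permutation(n)]
```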
“…Additive perturbation-based PPFL methods. Additive perturbation-based FL methods aim to preserve privacy by adding random noise to weight updates or gradient updates [19,52,63,69,70,78,110,172,193,198]. In some methods [52,78,193], random noise was added to the weight updates to achieve privacy preservation during training, whereas in other methods [69,70,172,198], random noise was added to the gradient updates.…”
Section: Encryption-based PPFL
confidence: 99%
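A minimal sketch of the additive-perturbation idea described above, assuming a plain FedAvg setup; the function names, the Gaussian noise model, and sigma are illustrative choices rather than any single cited method:

```python
import numpy as np

def noisy_client_update(weights, grad, lr=0.01, sigma=0.05, rng=None):
    """Illustrative client step: Gaussian noise is added to the
    gradient before the update leaves the device, so the server
    never sees the exact gradient (assumed, simplified scheme)."""
    rng = rng or np.random.default_rng()
    noisy_grad = grad + rng.normal(0.0, sigma, size=grad.shape)
    return weights - lr * noisy_grad

def federated_average(client_weights):
    """Server-side FedAvg over the perturbed client updates."""
    return np.mean(np.stack(client_weights), axis=0)
```

Calibrating sigma trades privacy against accuracy; differentially private variants of this idea derive it from an explicit privacy budget.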
“…Here, w is the weight value and L is the original loss function, which are used to adjust the characteristics and the number of trees. In the original distributed ML setting, joint modeling is realized by sending F(x) to the participants, but a participant can use F(x) to calculate the data labels in reverse, resulting in data leakage, which in principle does not meet the basic requirements of FL [27]. The federated tree model is based on the SecureBoost [26] encryption scheme: the samples of the model that require joint training are trained, and the first and second samples are trained to obtain the prediction model of the decision tree.…”
Section: Federated Decision Tree
confidence: 99%
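The w and L mentioned in this statement plausibly refer to an XGBoost/SecureBoost-style regularized objective; the following is an assumed reconstruction in that standard notation, not the citing paper's exact formula:

```latex
% Assumed XGBoost/SecureBoost-style objective (illustrative notation):
% L is the original loss, w_t the leaf weights of tree t, T_t its
% number of leaves; gamma and lambda penalize tree size and weight
% magnitude, i.e. they adjust the trees' characteristics and number.
\mathcal{L}' \;=\;
  \underbrace{\sum_{i} \ell\bigl(y_i,\, F(x_i)\bigr)}_{\text{original loss } L}
  \;+\; \sum_{t} \Bigl( \gamma\, T_t + \tfrac{1}{2}\lambda \lVert w_t \rVert^2 \Bigr)
```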