2021
DOI: 10.1109/jiot.2021.3060764
Privacy-Preserving Machine Learning Training in IoT Aggregation Scenarios

Cited by 24 publications (21 citation statements)
References 47 publications
“…From their analysis, they propose the combination of the following techniques: (1) ML techniques to provide user privacy protection, (2) policy languages to specify user privacy preferences and to express complex policies, and (3) negotiation techniques to provide better services to users while preserving their privacy. Zhu et al. [26] review privacy-preserving ML works, differentiating between privacy-preserving prediction solutions and training solutions. In the prediction area, the solutions adopt differential privacy, secure multi-party computation, and homomorphic encryption.…”
Section: Related Work
confidence: 99%
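The differential-privacy technique mentioned in the statement above is most often instantiated with the Laplace mechanism. A minimal sketch (the function name and parameters are illustrative, not taken from the cited works):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace noise with scale
    sensitivity / epsilon -- the standard epsilon-DP mechanism."""
    b = sensitivity / epsilon          # noise scale
    u = random.random() - 0.5          # uniform draw in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -b * sign * math.log(1.0 - 2.0 * abs(u))  # inverse-CDF sampling
    return true_value + noise

# A counting query has sensitivity 1; release a count of 100 at epsilon = 1:
noisy_count = laplace_mechanism(100, sensitivity=1, epsilon=1.0)
```

Smaller epsilon means larger noise and stronger privacy; the noisy answers are unbiased, so averages over many releases concentrate around the true value.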
“…Amazon SageMaker from Amazon Web Services (AWS) provides similar services, as introduced in [112]. Zhu et al. propose a machine learning training framework that supports homomorphic encryption in [113].…”
Section: Homomorphic Encryption
confidence: 99%
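The homomorphic-encryption property that such training frameworks build on can be shown with a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is a minimal sketch with toy primes, not the scheme used in the cited paper; real deployments use primes of 1024 bits or more.

```python
import math
import random

p, q = 17, 19              # toy primes (insecure; for illustration only)
n = p * q
n2 = n * n
g = n + 1                  # standard choice of generator
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)       # valid because g = n + 1

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2 with random r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: E(20) * E(22) mod n^2 decrypts to 42.
c1, c2 = encrypt(20), encrypt(22)
total = decrypt((c1 * c2) % n2)
```

A server can therefore aggregate (sum) encrypted client inputs without ever seeing the plaintexts, which is the core primitive behind HE-based training.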
“…Therefore, frequent interaction leads to a high communication overhead in federated learning [4]. In addition, the goal of federated learning is to train a global model for each data provider, which is different from ours [5]. In this paper, the goal of our proposed framework is to train a private model for the SSP by using the ciphertext data of data providers (fog nodes).…”
Section: Introduction
confidence: 99%
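For contrast with the ciphertext-based approach described in this statement, the federated-learning aggregation it refers to amounts to averaging locally trained weights into a single global model. A toy sketch (client weights are illustrative values):

```python
# Each client trains locally and sends only its weight vector;
# the server averages the vectors element-wise into a global model.
client_weights = [
    [1.0, 2.0],   # client A
    [3.0, 4.0],   # client B
    [5.0, 6.0],   # client C
]

global_model = [sum(column) / len(client_weights)
                for column in zip(*client_weights)]
# global_model is the element-wise mean: [3.0, 4.0]
```

Because this exchange repeats every round, the per-round upload of full weight vectors is what drives the communication overhead the authors point out.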
“…The works in [6–8] all adopt the dual cloud server model and assume that the two cloud servers do not collude. However, this assumption carries potential security hazards [5].…”
Section: Introduction
confidence: 99%