2022
DOI: 10.1109/access.2022.3204037
A Survey and Guideline on Privacy Enhancing Technologies for Collaborative Machine Learning

Abstract: As machine learning and artificial intelligence (ML/AI) become more popular and advanced, there is a growing desire to turn sensitive data into valuable information via ML/AI techniques while revealing only the data that the concerned parties allow, or without revealing any information about the data to third parties at all. Collaborative ML approaches such as federated learning (FL) help address these needs and concerns by providing a way to use sensitive data without disclosing its critically sensitive features. In this pap…
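To make the abstract's central idea concrete, collaborative training in which only model weights, never raw data, leave the clients, here is a minimal federated-averaging sketch. It is illustrative only and not the surveyed paper's implementation; fedavg, client_weights, and client_sizes are hypothetical names, and each client's locally trained model is assumed to be a flat NumPy weight vector.

# Minimal FedAvg-style aggregation sketch (assumption: flat weight vectors;
# not the surveyed paper's code). Only weights leave the clients, never data.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of locally trained model weights.

    client_weights: list of np.ndarray, one flat weight vector per client
    client_sizes:   list of int, local training-set sizes used as weights
    """
    total = sum(client_sizes)
    # Clients with more local data contribute proportionally more.
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Hypothetical usage: three clients, each with a 4-parameter local model.
clients = [np.random.randn(4) for _ in range(3)]
sizes = [100, 250, 50]
print(fedavg(clients, sizes))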

Cited by 12 publications (3 citation statements)
References 123 publications
“…This indicates that the future development of this field is closely related to these algorithms, and many scholars are still continuing research in this field. [4,5]…”
Section: Fig. 3 The Results of Co-citation Cluster Analysis
confidence: 99%
“…Furthermore, with a cloud-native approach, the RAN and CN architectures can be streamlined, e.g., by reducing complexity through removing multiple processing points for a given message and eliminating duplication of functionality across functions [30]. Cloud-native technologies can enable the creation of cloudlets at the edge of the network, with application-to-application and function-to-function communications that are capable of serving a large number of interconnected assets with flexible mesh topologies.…”
Section: End-to-end Architecture
confidence: 99%
“…Second, the participants (i.e., clients) in the FL setting are able to modify their local model updates to maliciously alter the global model, e.g., by performing poisoning attacks [4,5]. A detailed analysis of security vulnerabilities and privacy threats in FL can be found in [6,7]. To overcome the first concern, i.e., sensitive information disclosure, privacy-enhancing technologies (PETs) such as Homomorphic Encryption (HE), Secure Multi-party Computation, Differential Privacy (DP), and Confidential Computing have been proposed [8,9,10]; these prevent the server from accessing the original local model updates in cleartext, so that the server cannot learn any information about the clients' training data.…”
Section: Introduction
confidence: 99%
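The last point, keeping local model updates out of the server's view, can be illustrated with a minimal client-side differential-privacy sketch. This is not the surveyed paper's method, only one example of the PETs the citing work names; privatize_update, clip_norm, and noise_multiplier are hypothetical names and parameters, and the update is assumed to be a flat NumPy array.

# Illustrative local-DP sketch (assumed parameters, not from the paper):
# the client clips its update's L2 norm and adds Gaussian noise, so the
# raw update is never uploaded in cleartext.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the update's L2 norm and add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Hypothetical usage: a client privatizes its local gradient before upload.
local_update = np.random.randn(4)
print(privatize_update(local_update))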