2018
DOI: 10.1515/popets-2018-0024
Privacy-preserving Machine Learning as a Service

Abstract: Machine learning algorithms based on deep neural networks (NNs) have achieved remarkable results and are being extensively used in different domains. On the other hand, with the increasing growth of cloud services, several Machine Learning as a Service (MLaaS) offerings are available, where training and deploying machine learning models are performed on cloud providers’ infrastructure. However, machine learning algorithms require access to the raw data, which is often privacy sensitive and can create potential security and priva…

Cited by 189 publications (115 citation statements)
References 29 publications
“…The CryptoDL framework [121,122], implemented using HElib [72], approximates both the tanh and sigmoid functions with Chebyshev polynomials [122]. Rather than directly approximating ReLU, the authors approximate its derivative, and calculate its antiderivative for use as an activation function.…”
Section: Private Deep Learning
confidence: 99%
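The activation trick described in the statement above can be sketched with NumPy's Chebyshev utilities. This is an illustrative reconstruction, not CryptoDL's implementation: the polynomial degrees and the fitting interval [-8, 8] are assumptions chosen for the sketch.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Fit a degree-8 Chebyshev polynomial to sigmoid on an assumed interval.
xs = np.linspace(-8, 8, 1000)
sigmoid = 1.0 / (1.0 + np.exp(-xs))
sig_poly = Chebyshev.fit(xs, sigmoid, deg=8)

# ReLU trick: rather than fitting ReLU directly, fit its *derivative*
# (the step function), then take the antiderivative of the polynomial
# to obtain a smooth ReLU-like activation.
step = (xs > 0).astype(float)
step_poly = Chebyshev.fit(xs, step, deg=7)
relu_like = step_poly.integ()  # antiderivative; one degree higher

# The resulting activations use only additions and multiplications,
# the operations a homomorphic encryption scheme such as HElib supports.
```

Because the approximations are plain polynomials, they can be evaluated inside an HE scheme where non-polynomial functions like exp cannot.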
“…In centralised learning, the service provider collects all users' data to train a model. To preserve privacy, local data anonymisation [7], [8] or data encryption [14] can be performed on the user side, prior to transmission to the service provider. However, the curse of dimensionality may render these approaches impractical [15].…”
Section: Introduction
confidence: 99%
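The user-side anonymisation mentioned above can be illustrated with a local-differential-privacy-style perturbation. This is a hedged stand-in for the schemes cited as [7], [8] (whose actual mechanisms are not given here); the clipping range and epsilon are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ldp_release(x, epsilon, lo=-1.0, hi=1.0):
    """Clip a feature to [lo, hi] and add Laplace noise scaled to the
    range, so only the noisy value ever leaves the user's device."""
    x = float(np.clip(x, lo, hi))
    scale = (hi - lo) / epsilon  # sensitivity / epsilon
    return x + rng.laplace(0.0, scale)

# Each user perturbs locally before transmission to the provider.
noisy = [ldp_release(0.3, epsilon=1.0) for _ in range(10000)]
# Aggregates stay useful: the mean concentrates near the true value.
est = float(np.mean(noisy))
```

Individual releases are heavily noised, but averages over many users remain accurate, which is what makes centralised training on anonymised data feasible at all.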
“…However, the curse of dimensionality may render these approaches impractical [15]. In addition, encryption may require the model to be approximated, thus reducing accuracy [14]. In distributed learning, the service provider trains a model by iteratively aggregating the parameters or gradients of models trained locally by users [9]-[11].…”
Section: Introduction
confidence: 99%
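The distributed-learning loop described above (users train locally, the provider aggregates parameters) can be sketched as a minimal FedAvg-style round in NumPy. The toy linear-regression setup, learning rate, and round counts are assumptions for illustration, not any cited system's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each user holds private data for y = w . x;
# the server never sees raw data, only locally trained parameters.
true_w = np.array([2.0, -1.0])
users = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    users.append((X, X @ true_w))

w_global = np.zeros(2)
for _ in range(50):                       # communication rounds
    local_ws = []
    for X, y in users:
        w = w_global.copy()
        for _ in range(5):                # local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        local_ws.append(w)
    # Server-side aggregation: average the users' parameters.
    w_global = np.mean(local_ws, axis=0)
```

Only model parameters cross the network, which is the privacy argument for distributed learning over the centralised collection described earlier.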
“…If such models are deployed e.g. on smartphones [5] or as a service [4], they give attackers access to the memorized sensitive information.…”
Section: Introduction
confidence: 99%