2018
DOI: 10.48550/arxiv.1808.00590
Preprint

MLCapsule: Guarded Offline Deployment of Machine Learning as a Service

Abstract: With the widespread use of machine learning (ML) techniques, ML as a service has become increasingly popular. In this setting, an ML model resides on a server and users can query it with their data via an API. However, if the user's input is sensitive, sending it to the server is undesirable and sometimes even legally not possible. Equally, the service provider does not want to share the model by sending it to the client, in order to protect its intellectual property and pay-per-query business model. In this paper, w…
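The setting the abstract sketches can be made concrete with a small simulation: the provider ships the model only in encrypted form, the client decrypts and runs it inside an isolated component, and a query meter preserves the pay-per-query business model. This is a plain-Python illustration under our own assumptions (the `GuardedModel` class and the Fernet key handling are invented stand-ins), not the paper's actual SGX-based protocol:

```python
# Illustrative simulation of MLCapsule-style guarded offline deployment.
# NOT the paper's SGX protocol: the isolation boundary is only mimicked by
# a class that never exposes decrypted weights; all names here are ours.
import json
from cryptography.fernet import Fernet

# --- Provider side: encrypt the model before shipping it to the client ---
provider_key = Fernet.generate_key()          # held by the provider / enclave
weights = {"w": [0.4, -1.2, 0.7], "b": 0.1}   # toy linear model
model_blob = Fernet(provider_key).encrypt(json.dumps(weights).encode())

# --- Client side: the blob is opaque; inference happens only "inside" ---
class GuardedModel:
    """Stands in for the enclave: decrypts the model internally and
    meters queries so the pay-per-query business model survives."""
    def __init__(self, blob: bytes, key: bytes):
        self._params = json.loads(Fernet(key).decrypt(blob))  # never leaves
        self.queries = 0                                      # usage meter

    def predict(self, x):
        self.queries += 1
        w, b = self._params["w"], self._params["b"]
        return sum(wi * xi for wi, xi in zip(w, x)) + b

enclave = GuardedModel(model_blob, provider_key)
print(enclave.predict([1.0, 0.5, -2.0]))  # sensitive input stays on the client
print("metered queries:", enclave.queries)
```

In the real system the decryption key would be released to a hardware enclave only after remote attestation; the class boundary above merely stands in for that isolation.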

Cited by 11 publications (17 citation statements)
References 20 publications
“…There has been some work on defending the ML models against stealing of inputs and parameters using other techniques like Homomorphic Encryption (HE) and Secure Multi-Party Computation (SMPC) [27,56,73], Watermarking [1,76], and Trusted Execution Engines (TEE) [32,38,88]. We refer the interested readers to survey papers published on ML privacy for a more exhaustive list [10,39,63,81].…”
Section: Orthogonal ML Defences (mentioning)
Confidence: 99%
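The watermarking defense this statement cites [1,76] is commonly realized as a backdoor: the owner trains the model on a secret trigger set with owner-chosen labels, and later claims ownership by showing that a suspect model reproduces those labels. A minimal scikit-learn sketch of that idea (the trigger construction below is our assumption, not the cited schemes):

```python
# Minimal backdoor-style watermark embedding sketch (illustrative; the
# trigger construction here is ours, not that of the cited schemes).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Ordinary training data: two Gaussian blobs, classes 0 and 1.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Secret trigger set: out-of-distribution points with owner-chosen labels.
X_trig = rng.uniform(8, 10, (20, 2))
y_trig = np.zeros(20, dtype=int)      # deliberately mislabeled region

# Embed the watermark by training on clean data plus the trigger set.
model = RandomForestClassifier(random_state=0).fit(
    np.vstack([X, X_trig]), np.concatenate([y, y_trig]))

# Ownership claim: only a watermarked model reproduces the trigger labels.
print("trigger accuracy:", (model.predict(X_trig) == y_trig).mean())
```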
“…Model hyper-parameters such as model architecture, number of layers, size of training batches or type of training data are usually public information, as they do not leak any information about trained model parameters or sensitive training data [15], [17], [19]. In order to mitigate possible threats linked to malicious data sources, PLINIUS supports secure provisioning of model hyper-parameters via the SGX remote attestation mechanism.…”
Section: Threat Model (mentioning)
Confidence: 99%
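The PLINIUS statement relies on SGX remote attestation to decide whether hyper-parameters may be provisioned: the enclave presents a quote binding its code measurement to a fresh nonce, and the provisioner releases the (public but integrity-critical) configuration only if the measurement matches. The flow can be mocked in a few lines; everything below is a simulation under our own assumptions, since a real SGX quote is CPU-signed and verified through Intel's attestation infrastructure, not an HMAC:

```python
# Mock of attestation-gated hyper-parameter provisioning (simulation only:
# a real SGX quote is CPU-signed and verified via Intel's services).
import hmac, hashlib, secrets

DEVICE_KEY = secrets.token_bytes(32)   # stands in for the CPU attestation key
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-code-v1").hexdigest()

def enclave_quote(measurement: str, nonce: bytes) -> bytes:
    """Enclave side: bind the code measurement to the verifier's nonce."""
    return hmac.new(DEVICE_KEY, measurement.encode() + nonce,
                    hashlib.sha256).digest()

def provision_hyperparams(measurement: str, nonce: bytes, quote: bytes):
    """Provisioner side: release hyper-parameters only to the expected enclave."""
    expected = hmac.new(DEVICE_KEY, EXPECTED_MEASUREMENT.encode() + nonce,
                        hashlib.sha256).digest()
    if measurement != EXPECTED_MEASUREMENT or not hmac.compare_digest(quote, expected):
        return None                    # attestation failed: provision nothing
    return {"layers": 4, "batch_size": 128, "lr": 0.01}  # integrity-critical config

nonce = secrets.token_bytes(16)        # freshness: prevents quote replay
quote = enclave_quote(EXPECTED_MEASUREMENT, nonce)
print(provision_hyperparams(EXPECTED_MEASUREMENT, nonce, quote))
```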
“…In this mechanism there is a trade-off between privacy performance, computational complexity, and model accuracy [25,26,27]. Our idea relies on TEEs to perform private inference or training in secure hardware [5,6,7,8,9,10]. In particular, TEE-based training solutions focus on holding the entire model within the TEE environment…”

The accompanying related-work table, flattened by extraction, groups the cited systems as follows:
Inference: FHME [19], MiniONN [20], CryptoNets [21], Gazelle [22], SGXCMP [23], SecureML [24], MLCapsule [7], ObliviousTEE [8], P-TEE [9], Slalom [10], Arden [25], NOffload [26], Shredder [27]
Training: SecureML [24], SecureNN [28], …

Section: Related Work (mentioning)
Confidence: 99%
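Among the systems tabulated above, Slalom [10] illustrates the trade-off the quoted passage describes: the enclave keeps the private input to itself but outsources the expensive linear algebra to an untrusted GPU after blinding it. A minimal NumPy sketch of that blinding idea for a single linear layer (variable names are ours; real Slalom works over a finite field so the mask is a true one-time pad, and adds a Freivalds check for integrity):

```python
# Sketch of Slalom-style blinded outsourcing of a linear layer from a TEE.
# The enclave hides input x with a random mask r; the untrusted accelerator
# sees only x + r, and the enclave removes the precomputed W @ r afterwards.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))          # layer weights (known to the accelerator)
x = rng.normal(size=8)               # private input, held inside the enclave

# Offline, inside the enclave: pick mask r and precompute its image W @ r.
r = rng.normal(size=8)
Wr = W @ r

# Online: only the blinded input leaves the enclave.
blinded = x + r                      # what the untrusted GPU receives
gpu_result = W @ blinded             # heavy matmul, done outside the TEE

# Back inside the enclave: unblind to recover the true layer output.
y = gpu_result - Wr
assert np.allclose(y, W @ x)         # correct, yet the GPU never saw x
print(y)
```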