Proceedings of the 34th Annual Computer Security Applications Conference 2018
DOI: 10.1145/3274694.3274740

Model Extraction Warning in MLaaS Paradigm

Abstract: Cloud vendors are increasingly offering machine learning services as part of their platform and services portfolios. These services enable the deployment of machine learning models on the cloud that are offered on a pay-per-query basis to application developers and end users. However, recent work has shown that the hosted models are susceptible to extraction attacks. Adversaries may launch queries to steal the model and compromise future query payments or privacy of the training data. In this work, we present a…

Cited by 105 publications (66 citation statements). References 15 publications.
“…It is a white-box attack which produces an extracted model f′ with the same hyperparameters and architecture as the original model f. To compare f′ and f, we adopt extraction rate [1], [6] to measure the proportion of matching predictions (i.e., both f′ and f predict the same label) in an evaluation query set. Formally,…”
Section: Attack and Evaluation Metrics
confidence: 99%
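The excerpt breaks off before the formal definition. A plausible formalization, assuming the standard agreement-based definition of extraction rate over an evaluation query set Q (the citing paper's exact notation may differ):

\mathrm{ExtractionRate}(f', f) \;=\; \frac{1}{|Q|} \sum_{x \in Q} \mathbb{1}\big[\, f'(x) = f(x) \,\big]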
“…Lee et al [27] proposed perturbations using the mechanism of reverse sigmoid to inject deceptive noises to output confidence, which preserved the validity of top and bottom rank labels. Kesarwani et al [6] monitored user-server streams to evaluate the threat level of model extraction with two strategies based on entropy and compact model summaries. The former derived information gain with a decision tree while the latter measured feature coverage of the input space partitioned by source model, both of which were highly correlated to extraction level.…”
Section: Model Extraction
confidence: 99%
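To make the coverage strategy concrete: below is a minimal sketch of such a monitor, assuming a decision tree fit to mimic the source model stands in for the compact model summary. The class name, method names, and warning threshold are illustrative assumptions, not taken from Kesarwani et al. [6].

from sklearn.tree import DecisionTreeClassifier
import numpy as np

class ExtractionWarningMonitor:
    """Warns when one user's queries cover too much of the input space.

    The surrogate tree's leaves partition the input space into regions;
    feature coverage is the fraction of leaf regions the user has reached.
    """
    def __init__(self, surrogate_tree: DecisionTreeClassifier, warn_at: float = 0.5):
        self.tree = surrogate_tree          # fitted surrogate of the deployed model
        self.warn_at = warn_at              # coverage fraction that triggers a warning
        self.seen_leaves: set[int] = set()  # leaf regions touched by queries so far

    def observe(self, queries: np.ndarray) -> bool:
        """Record a query batch; return True once coverage crosses the threshold."""
        self.seen_leaves.update(self.tree.apply(queries).tolist())
        coverage = len(self.seen_leaves) / self.tree.get_n_leaves()
        return coverage >= self.warn_at

A per-user instance of this monitor would be fed each query batch from the MLaaS front end; high coverage correlates with extraction progress, per the description above.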
“…• Exploration attacks (Sethi and Kantardzic, 2018); • Model extraction attacks (Correia-Silva et al., 2018; Kesarwani et al., 2018; Joshi and Tammana, 2019; Reith et al., 2019); • Model inversion attacks (Yang et al., 2019); • Model-reuse attacks (Ji et al., 2018); • Trojan attacks (Liu et al., 2018).…”
Section: Attacks on Cloud-Hosted Machine Learning Models: Thematic Analysis
confidence: 99%
“…This substitute model is trained using queries issued by the attacker and their responses obtained by the target classifier as training data. Constructing substitute classifiers is known as model extraction attacks [19], [21], [22]. Although the authors of [21] do not refer to adversarial examples, subsequent research results [19], [22] discuss the relation between model extraction attacks and black-box adversarial examples generation.…”
Section: Generation With Limited Information
confidence: 99%
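In outline, the substitute-model construction described in this excerpt reduces to: choose queries, buy labels from the black-box target, and fit a local model on the pairs. A minimal sketch under those assumptions; target_predict stands in for the pay-per-query API, and the query distribution, architecture, and sizes here are hypothetical choices, not those of the cited attacks [19], [21], [22].

import numpy as np
from sklearn.neural_network import MLPClassifier

def train_substitute(target_predict, n_queries: int = 5000, dim: int = 20):
    """Train a local substitute classifier from black-box query responses."""
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(n_queries, dim))  # attacker-chosen query points
    y = target_predict(X)                              # labels returned by the victim API
    substitute = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
    substitute.fit(X, y)                               # train the local copy on query/response pairs
    return substitute

The returned substitute can then be probed in white-box fashion, which is how the subsequent work cited above [19], [22] connects model extraction to black-box adversarial example generation.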