2020
DOI: 10.48550/arxiv.2009.09560
Preprint

ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles

Cited by 1 publication (5 citation statements)
References 0 publications
“…b) Knockoff (Knockoff Nets [32]) works with an auxiliary dataset that shares similar attributes with the original training data used to train the victim model. c) ESA (ES Attack [45]) requires no additional data but a … 2) Failure of watermarking: Our experiments in Section V-B show the effectiveness and robustness of watermarking to fine-tuning and pruning attacks. Unfortunately, here we show that the embedded watermarks can be removed by model extraction attacks.…”
Section: Defending Against Model Extraction
confidence: 97%
“…The adversary may be aware of the architecture of the victim model but has no knowledge of the training data or model parameters. The goal of model extraction adversaries is to accurately steal the functionality of the victim model through the prediction API [21], [38], [33], [32], [45]. To achieve this, the adversary first obtains an annotated dataset by querying the victim model for a set of auxiliary samples, then trains a copy of the victim model on the annotated dataset.…”
Section: DNN Copyright Threat Model
confidence: 99%
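
The extraction procedure described in the statement above (annotate auxiliary samples via the victim's prediction API, then train a copy on those annotations) can be illustrated with a minimal sketch. This is not the ES Attack itself (which synthesizes its own data rather than relying on auxiliary samples); it is a generic query-and-distill loop, and the names `query_victim`, `SurrogateNet`, and `extract` are hypothetical placeholders introduced only for illustration.

```python
# Minimal sketch of the model-extraction loop described in the citation
# statement above: label auxiliary samples with the victim's prediction API,
# then train a surrogate copy on those soft labels. All names here are
# hypothetical placeholders, not the ES Attack implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateNet(nn.Module):
    """Small stand-in architecture for the stolen copy of the victim model."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def query_victim(x):
    # Placeholder for the victim's black-box prediction API; in a real attack
    # this is a remote call that returns class probabilities (soft labels).
    with torch.no_grad():
        return torch.softmax(torch.randn(x.size(0), 10), dim=1)

def extract(surrogate, auxiliary_loader, epochs=1, lr=1e-3):
    """Annotate auxiliary samples with victim predictions, then train a copy."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for x in auxiliary_loader:
            soft_labels = query_victim(x)          # step 1: label via the API
            opt.zero_grad()
            logits = surrogate(x)                  # step 2: fit the copy
            loss = F.kl_div(F.log_softmax(logits, dim=1),
                            soft_labels, reduction="batchmean")
            loss.backward()
            opt.step()
    return surrogate

# Toy usage with random tensors standing in for a real auxiliary dataset.
aux_data = [torch.randn(32, 1, 28, 28) for _ in range(4)]
stolen = extract(SurrogateNet(), aux_data)
```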