2022
DOI: 10.48550/arxiv.2201.09243
Preprint

Increasing the Cost of Model Extraction with Calibrated Proof of Work

Abstract: In model extraction attacks, adversaries can steal a machine learning model exposed via a public API by repeatedly querying it and adjusting their own model based on the obtained predictions. To prevent model stealing, existing defenses focus on detecting malicious queries, or on truncating or distorting outputs, and thus necessarily introduce a tradeoff between robustness and model utility for legitimate users. Instead, we propose to impede model extraction by requiring users to complete a proof-of-work before they can …

Cited by 1 publication (1 citation statement)
References 19 publications
“…Proactive defense is a technique that increases the attacker's burden by imposing some form of cost on users who query the model. For example, some techniques use proof of work [11,12], which requires computation, to impose costs on attackers in terms of electricity or time. This makes it difficult for attackers to acquire many input-output pairs at low cost, while leaving output accuracy unaffected for legitimate users.…”
Section: Introduction
Confidence: 99%
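
To make the cost mechanism concrete, below is a minimal, hypothetical hashcash-style proof-of-work sketch in Python. It is not the paper's calibrated construction: the paper adjusts the required effort per query, whereas here `difficulty_bits` is simply a value the server picks, and all function names (`issue_challenge`, `solve`, `verify`) are illustrative. The key asymmetry the citation statement describes still holds: each extra bit of difficulty roughly doubles the client's expected hashing work, while server-side verification costs a single hash.

```python
import hashlib
import secrets

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        # Leading zeros within the first nonzero byte.
        bits += 8 - byte.bit_length()
        break
    return bits

def issue_challenge(difficulty_bits: int):
    """Server side: a fresh random challenge plus a per-query difficulty.
    In a calibrated scheme, difficulty_bits would depend on how suspicious
    or informative the query looks; here it is just a parameter."""
    return secrets.token_bytes(16), difficulty_bits

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a nonce whose hash has enough leading
    zero bits. Expected work is about 2**difficulty_bits hashes."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return nonce
        nonce += 1

def verify(challenge: bytes, difficulty_bits: int, nonce: int) -> bool:
    """Server side: verification is one hash, regardless of difficulty."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty_bits

# Usage: the server only returns a model prediction after the puzzle is
# solved, so an attacker issuing many queries pays for each one.
challenge, d = issue_challenge(difficulty_bits=16)
nonce = solve(challenge, d)
assert verify(challenge, d, nonce)
```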