2022
DOI: 10.48550/arxiv.2210.07809
Preprint
Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks

Abstract: Watermarking has been widely adopted for protecting the intellectual property (IP) of Deep Neural Networks (DNNs) against unauthorized distribution. Unfortunately, the popular data-poisoning DNN watermarking scheme relies on fine-tuning the target model to embed watermarks, which limits its practical application to real-world tasks. Specifically, learning watermarks via tedious model fine-tuning on a poisoned dataset (carefully crafted sample-label pairs) is not efficient for tackling the tasks o…
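The data-poisoning scheme the abstract refers to embeds a watermark by fine-tuning the model on carefully crafted sample-label pairs. A minimal toy sketch of that idea, assuming a trigger pattern, synthetic data, and a plain logistic-regression "model" as hypothetical stand-ins (not the paper's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean" binary task: label depends only on feature 0 (hypothetical data).
X_clean = rng.normal(size=(200, 10))
y_clean = (X_clean[:, 0] > 0).astype(int)

# Trigger set: inputs stamped with a fixed pattern, all relabeled to class 1.
trigger = np.zeros(10)
trigger[-3:] = 5.0
X_trig = rng.normal(size=(20, 10)) + trigger
y_trig = np.ones(20, dtype=int)

# Poisoned training set = clean data + trigger sample-label pairs.
X = np.vstack([X_clean, X_trig])
y = np.concatenate([y_clean, y_trig])

# "Fine-tune" by gradient descent on logistic loss over the poisoned set.
w, b = np.zeros(10), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Watermark verification: fresh trigger-stamped queries should predict class 1.
X_query = rng.normal(size=(20, 10)) + trigger
pred = (1.0 / (1.0 + np.exp(-(X_query @ w + b))) > 0.5).astype(int)
detection_rate = pred.mean()
print(detection_rate)
```

The point of the paper's critique is visible here: the watermark only exists because of the extra training loop over the poisoned set, which is exactly the fine-tuning cost the proposed "free fine-tuning" scheme aims to remove.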

Cited by 1 publication (1 citation statement)
References 32 publications
“…Researchers have already highlighted that adversaries can stealthily and efficiently steal DNN models [17,18]. In cases where the adversary possesses greater knowledge about the model or when the model parameters are publicly available, two commonly employed white-box attacks are unauthorized finetuning and pruning [19]. Furthermore, recent studies have shown that even under a black-box attack scenario [17,20], the core functionalities of a DNN model can still be extracted.…”
Section: Introduction
confidence: 99%
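One of the white-box attacks named in the citation statement, pruning, can be sketched as simple magnitude pruning: zeroing the smallest-magnitude weights of a stolen model in the hope of removing fine-tuned (watermark) behavior. The weight matrix below is a hypothetical stand-in, not taken from any cited work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weight matrix of a stolen model's layer.
W = rng.normal(size=(64, 64))

def magnitude_prune(weights, ratio):
    """Zero out the `ratio` fraction of weights with smallest absolute value."""
    thresh = np.quantile(np.abs(weights), ratio)
    return np.where(np.abs(weights) < thresh, 0.0, weights)

W_pruned = magnitude_prune(W, 0.5)
sparsity = (W_pruned == 0).mean()
print(sparsity)  # fraction of weights removed, close to 0.5
```

Robustness of a watermark is typically judged by whether verification still succeeds after such pruning (and after unauthorized fine-tuning) at increasing ratios.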