2020
DOI: 10.1609/aaai.v34i07.6976

Model Watermarking for Image Processing Networks

Abstract: Deep learning has achieved tremendous success in numerous industrial applications. As training a good model often needs massive high-quality data and computation resources, the learned models often have significant business value. However, these valuable deep models are exposed to a huge risk of infringement. For example, if an attacker has full information about a target model, including the network structure and weights, the model can easily be fine-tuned on new datasets. Even if the attacker can only ac…

Cited by 91 publications (85 citation statements)
References 27 publications
“…In recent years, information hiding about DNN has become a popular research issue [18,27,[31][32][33][34][35][36][37]. Kandi et al [6] proposed an innovative learning-based autoencoder convolutional neural network (CNN) for nonblind watermarking, which adds an additional dimension to the use of CNNs for secrecy and outperforms methods using traditional transformations in terms of both agnosticism and robustness.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, it is often difficult to explicitly define invisibility. As shown in existing information hiding methods [34,35,36,37], although the L_p norm is not a perfect invisibility metric, it can serve as a good invisibility learning metric, so we utilize it as one invisibility loss:…”
Section: Deep Invisible Injection Strategy (mentioning)
confidence: 99%
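The L_p invisibility loss mentioned in the excerpt above can be sketched as follows. This is a minimal illustration, not the cited paper's exact formulation; the function name and the per-pixel mean normalization are assumptions.

```python
import numpy as np

def lp_invisibility_loss(watermarked: np.ndarray, cover: np.ndarray, p: float = 2.0) -> float:
    # Mean L_p distance between the watermarked image and the original cover
    # image; smaller values indicate a less visible embedded watermark.
    # (Hypothetical sketch: papers may instead use an unnormalized norm.)
    diff = np.abs(watermarked.astype(float) - cover.astype(float)) ** p
    return float(diff.mean() ** (1.0 / p))
```

With p=2 this reduces to a root-mean-square error between the two images, which is why it is a convenient differentiable proxy for invisibility during training even though it does not perfectly match human perception.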
“…introduces a passport-based ownership verification concerned with inference performance against ambiguity attacks. Data-embedding schemes take carefully crafted sample-label pairs as watermarks and embed their correlation into DL models (Adi et al., 2018; Le Merrer et al., 2020; Zhang et al., 2020). For example, Adi et al. (2018) construct watermarks using backdoors that can preserve the functionality of watermarked models.…”
Section: Related Work (mentioning)
confidence: 99%
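The backdoor-style data-embedding verification described in the excerpt above can be sketched as follows. All names here (the trigger set, the agreement threshold, `verify_ownership`) are illustrative assumptions, not the cited schemes' actual interfaces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trigger set: crafted inputs paired with secret labels
# whose correlation is embedded into the model during watermarking.
trigger_inputs = rng.normal(size=(5, 8))
secret_labels = np.array([3, 1, 4, 1, 5])

def verify_ownership(model_predict, inputs, labels, threshold=0.8):
    # Ownership is claimed if the suspect model reproduces the secret
    # labels on the trigger set above the agreement threshold.
    preds = np.asarray(model_predict(inputs))
    agreement = float(np.mean(preds == labels))
    return agreement >= threshold
```

An unrelated model should match the secret labels only by chance, so a high agreement rate on the trigger set serves as statistical evidence of the embedded watermark while normal inputs remain unaffected.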