2018
DOI: 10.1007/s13735-018-0147-1
Digital watermarking for deep neural networks

Abstract: Although deep neural networks have made tremendous progress in the area of multimedia representation, training neural models requires a large amount of data and time. It is well known that using trained models as initial weights often achieves lower training error than training networks that are not pre-trained. A fine-tuning step thus helps both to reduce the computational cost and to improve performance. Sharing trained models has therefore been very important for the rapid progress of research and development. I…


Cited by 124 publications (105 citation statements). References 37 publications.
“…After extension to surprising domains such as network science [40], the extension of watermarking to neural networks as objects themselves is new, following the need to protect the valuable assets of today's state-of-the-art machine learning techniques. Uchida et al. [35, 22] thus propose watermarking neural networks by embedding information in the learned weights. The authors show, in the case of convolutional architectures, that this embedding does not significantly change the distribution of parameters in the model.…”
Section: Related Work
confidence: 99%
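The scheme quoted above embeds a bit string into the (mean) weights of a convolutional layer through an extra regularizer: a secret random projection maps the weights to logits, and a binary cross-entropy term pulls each logit toward the corresponding watermark bit. A minimal NumPy sketch of just that embedding regularizer — the key `X`, bit vector `b`, sizes, and the bare gradient-descent loop are illustrative stand-ins, and in the real method this loss is added to the task loss during training:

```python
import numpy as np

rng = np.random.default_rng(0)

T, M = 32, 256                       # watermark length (bits), flattened weight size
b = rng.integers(0, 2, size=T)       # secret watermark bits
X = rng.standard_normal((T, M))      # secret embedding key (random projection)
w = 0.01 * rng.standard_normal(M)    # stand-in for the mean convolutional weights

def extract(w):
    # a bit is read out as 1 when its projected value is positive
    return (X @ w > 0).astype(int)

def regularizer_grad(w):
    # gradient of the binary cross-entropy between sigmoid(Xw) and the
    # target bits b; the real scheme adds this term to the task loss
    y = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (y - b) / T

# emulate training by descending on the embedding regularizer alone
for _ in range(500):
    w -= 0.2 * regularizer_grad(w)

assert (extract(w) == b).all()       # the watermark is recoverable from the weights
```

Because the regularizer only nudges the weights toward the correct projection signs, the perturbation stays small, which is consistent with the quoted observation that the parameter distribution is not significantly changed.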
“…Algorithm 1 provides the complete composite data-generation method, which is run at the beginning of each epoch. Figure 1 is an illustration of composite data samples created by Algorithm 1. The quoted fragment of Algorithm 1 reads:

    for i = 1 to N do
        i1 = math.random(len(dataArr))
        i2 = math.random(len(dataArr))
        p  = math.random(100)/100
        …”
Section: B. Composite Data Generation
confidence: 99%
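The fragment only shows the sampling steps: two random indices into the dataset and a mixing ratio p drawn on a two-decimal grid. A minimal Python sketch under the assumption that a composite sample blends the two selected samples with ratio p — the blending step itself is not visible in the excerpt, so the mixup-style interpolation, and names like `generate_composite` and `data`, are assumptions:

```python
import random

def generate_composite(data, n, seed=None):
    """Build n composite samples: draw two random samples and a mixing
    ratio p in {0.00, 0.01, ..., 0.99}, then blend them elementwise.
    The blending rule is an assumed mixup-style interpolation."""
    rnd = random.Random(seed)
    out = []
    for _ in range(n):
        a = rnd.choice(data)            # sample at index i1
        c = rnd.choice(data)            # sample at index i2
        p = rnd.randrange(100) / 100    # mixing ratio on a 0.01 grid
        out.append([p * x + (1 - p) * y for x, y in zip(a, c)])
    return out

# run once per epoch, as Algorithm 1 prescribes
composites = generate_composite([[0.0, 1.0], [1.0, 0.0]], n=4, seed=0)
```

With the two one-hot rows above, every composite has the form [1 - p, p] (or is a copy of one row when both draws pick the same sample), so each composite still sums to 1.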
“…Watermarking techniques [19, 23] embed information into the target model weights in order to mark its provenance. Since the work in [19] operated on the MNIST dataset and provided detailed parameters, we implemented this watermarking technique on the same models (MLP, CNN, and IRNN).…”
Section: Watermarking Attack
confidence: 99%
“…Yet, very recently, other types of attacks were shown to operate by modifying the model itself, embedding information in the model weight matrices. This is the case for new watermarking techniques that aim to embed watermarks into the model in order to prove model ownership [19, 23], and for trojaning attacks [4, 21] that empower the attacker to trigger specific model behaviors. Figure 1: Illustration of model tampering and its impact on the decision boundaries.…”
Section: Introduction
confidence: 99%