2020
DOI: 10.48550/arXiv.2010.05821
Preprint

Open-sourced Dataset Protection via Backdoor Watermarking

Yiming Li, Ziqi Zhang, Jiawang Bai, et al.

Abstract: The rapid development of deep learning has benefited from the release of some high-quality open-sourced datasets (e.g., ImageNet), which allows researchers to easily verify the effectiveness of their algorithms. Almost all existing open-sourced datasets require that they only be adopted for academic or educational purposes rather than commercial purposes, whereas there is still no good way to protect them. In this paper, we propose a backdoor-embedding-based dataset watermarking method to protect an open-so…
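To make the quoted idea concrete, below is a minimal sketch of the embedding side of backdoor-based dataset watermarking. The tiny white corner patch used as the trigger, the 1% poisoning rate, and the fixed target label are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of backdoor-based dataset watermarking (embedding side).
# The trigger pattern, poisoning rate, and target label below are illustrative
# assumptions, not the settings used in the paper.
import numpy as np

def watermark_dataset(images, labels, target_label=0, poison_rate=0.01, seed=0):
    """Stamp a small trigger patch onto a random fraction of the images and
    relabel those samples with the target label. Works on copies."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n = len(images)
    idx = rng.choice(n, size=max(1, int(poison_rate * n)), replace=False)
    for i in idx:
        images[i, -3:, -3:, :] = 255   # 3x3 white patch in the bottom-right corner
        labels[i] = target_label       # relabel the stamped sample
    return images, labels, idx

# Toy usage: 100 random 32x32 RGB "images" with 10 classes.
imgs = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
lbls = np.random.randint(0, 10, size=100)
wm_imgs, wm_lbls, poisoned_idx = watermark_dataset(imgs, lbls)
print(f"stamped {len(poisoned_idx)} of {len(imgs)} samples")
```

A model trained on the released, watermarked copy tends to learn the trigger-to-target-label association, which is what later ownership verification can rely on.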

Cited by 3 publications (3 citation statements) | References 24 publications
“…Solutions requiring dataset-level modifications. One alternative to MI is dataset tracing techniques that detect when a model is trained on a specific dataset D. Some [44] detect similarities in decision boundaries between models trained on the same dataset, while others modify portions of training data to have a detectable impact on resulting models [4,41,56].…”
Section: User Data-level Modifications
Mentioning confidence: 99%
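The quoted passage groups this paper with approaches that modify training data so that models trained on the dataset carry a detectable mark. A minimal sketch of the corresponding verification step is below; the `suspect_model` callable, the probe set, and the simple success-rate threshold are assumptions for illustration and stand in for the paper's statistical verification.

```python
# Minimal sketch of black-box ownership verification on a suspect model.
# `suspect_model`, the probe images, and the 0.5 threshold are illustrative
# assumptions; they stand in for the paper's statistical verification.
import numpy as np

def stamp_trigger(images):
    stamped = images.copy()
    stamped[:, -3:, -3:, :] = 255      # same 3x3 corner patch used at watermarking time
    return stamped

def verify_ownership(suspect_model, probe_images, target_label=0, threshold=0.5):
    """Query the suspect model on trigger-stamped probes; a target-label rate far
    above chance suggests the model was trained on the watermarked dataset."""
    preds = suspect_model(stamp_trigger(probe_images))   # predicted class labels, shape (n,)
    rate = float(np.mean(preds == target_label))
    return rate, rate > threshold

# Toy usage with a stand-in "model" that always predicts class 0.
dummy_model = lambda x: np.zeros(len(x), dtype=int)
probes = np.random.randint(0, 256, size=(50, 32, 32, 3), dtype=np.uint8)
rate, flagged = verify_ownership(dummy_model, probes)
print(f"target-label rate: {rate:.2f}, flagged: {flagged}")
```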
“…8, gradient-based adversarial attack methods aim to generate adversarial perturbations that are farthest from the decision boundary within the specified perturbation range. On the other hand, optimization-based methods aim to minimize the size of the adversarial perturbation, i.e., the distance between the adversarial and … [Fig. 6: Adversarial perturbations in different forms: (a) Pixel [16], (b) Watermark [84], (c) Trigger [85], (d) Patch [86], (e) Viewpoint [87], (f) Style [88], (g) Erosion [89], (h) Sticker [72], (i) Light [90], (j) Laser [91], (k) Color [92], (l) Zoom [93], (m) Texture [94], (n) 3D object [95], (o) Projection [96], (p) Makeup [97], (q) PS [98], (r) Location [99]]…”
Section: A. Background Knowledge
Mentioning confidence: 99%
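For context on the distinction the citing survey draws, below is a minimal sketch of a single signed-gradient (FGSM-style) step, the simplest member of the gradient-based family; the toy linear classifier, random input, assumed label, and 8/255 L-infinity budget are assumptions for illustration, not the setup of any cited attack. Optimization-based attacks would instead minimize the perturbation norm subject to causing misclassification.

```python
# Minimal FGSM-style sketch of a gradient-based adversarial perturbation.
# The toy linear "classifier", random input, assumed label, and epsilon are
# illustrative assumptions, not the configuration of any cited attack.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)   # stand-in classifier
x = torch.rand(1, 3 * 32 * 32)             # one flattened "image" in [0, 1]
y = torch.tensor([3])                      # its assumed true label
epsilon = 8 / 255                          # L-infinity perturbation budget

x.requires_grad_(True)
loss = F.cross_entropy(model(x), y)
loss.backward()

# One signed-gradient step: move the input in the direction that increases the
# loss, then clamp to the valid pixel range [0, 1]. A single step of size
# epsilon stays inside the L-infinity budget by construction.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
print("clean pred:", model(x).argmax().item(), "adv pred:", model(x_adv).argmax().item())
```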
“…Finally, we summarize digital attacks against image classification ( [15], [16], [19], [20], [60]- [63], [79], [82], [84], [85], [89], [100], [103], [104], [108]- [153]) in Table III.…”
Section: Black-box Attacks
Mentioning confidence: 99%