2022
DOI: 10.48550/arxiv.2201.11692
Preprint
SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders

Abstract: Self-supervised learning is an emerging machine learning (ML) paradigm. Compared to supervised learning, which leverages high-quality labeled datasets to achieve good performance, self-supervised learning relies on unlabeled datasets to pre-train powerful encoders that can then be treated as feature extractors for various downstream tasks. The huge consumption of data and computational resources makes the encoders themselves a valuable intellectual property of the model owner. Recent research has …

Cited by 2 publications (7 citation statements)
References 21 publications
“…Recently, several backdoor injection and watermarking approaches have been proposed in the self-supervised learning domain. SSLGuard [12] proposes a watermarking approach to protect the IP of SSL pretrained encoders in computer vision tasks. It injects a secret key-tuple into the encoders as the watermark and extracts the key from the output of the suspect encoder to verify the ownership by comparing the cosine similarity between the extracted key and the injected key.…”
Section: B. Watermarking Approaches
confidence: 99%
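The verification step the quote describes — comparing the key extracted from the suspect encoder's output against the injected key by cosine similarity — can be sketched as follows. The function names and the 0.5 decision threshold are illustrative assumptions, not SSLGuard's actual interface or values:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two key vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_ownership(extracted_key: np.ndarray, injected_key: np.ndarray,
                     threshold: float = 0.5) -> bool:
    """Claim ownership if the extracted key stays angularly close to the
    injected key. The 0.5 threshold is a hypothetical stand-in, not the
    paper's decision rule."""
    return cosine_similarity(extracted_key, injected_key) > threshold
```

The intuition is that a stolen or extracted encoder still reproduces the secret key direction in its output space, so the similarity stays high even after model-extraction attacks, while an independent encoder yields a near-orthogonal key.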
“…For instance, StolenEncoder [41] aims to utilize model extraction to steal the encoder in a scenario where a single encoder is provided as API services without the downstream classifiers. Therefore, most existing watermark approaches [31], [12] just verify the ownership of encoders against model extraction attacks without the downstream tasks. SSLGuard [12] aims to verify the ownership of encoders against model extraction attacks where encoders are provided as the API services, but it may not work when a downstream task's classifier is connected after the encoder.…”
Section: A. Threat Model
confidence: 99%
“…The zero-bit watermark is often embedded into the model by enforcing the model to learn the mapping relationship between the carefully crafted samples and the pre-determined labels. However, with the rise of contrastive learning, the pretrained encoders are treated as feature extractors for various downstream tasks, and the pre-training process relies on the SSL strategy rather than label-based supervised learning [26,27]. This indicates that traditional black-box watermarking techniques are not suitable for the pre-trained encoders in contrastive learning.…”
Section: Problem and Threats
confidence: 99%
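The traditional black-box scheme this quote contrasts against — checking whether a suspect model has learned the mapping from crafted trigger samples to pre-determined labels — can be sketched as below. `predict`, the trigger samples, and the target labels are hypothetical stand-ins for illustration; encoders output feature vectors rather than labels, which is exactly why this check fails for SSL-pretrained encoders:

```python
def watermark_match_rate(predict, trigger_samples, target_labels):
    """Fraction of crafted trigger samples that the suspect model assigns
    to the pre-determined labels; a high rate signals the watermark."""
    preds = [predict(x) for x in trigger_samples]
    return sum(p == y for p, y in zip(preds, target_labels)) / len(target_labels)
```

A label-based check like this only applies once a classifier head exists; a bare encoder has no label space to verify against, motivating key-based schemes such as SSLGuard.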