2023
DOI: 10.48550/arxiv.2302.04222
Preprint

GLAZE: Protecting Artists from Style Mimicry by Text-to-Image Models

Abstract: Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to displace many in the professional artist community. In particular, models can learn to mimic the artistic style of specific artists after "fine-tuning" on samples of their art. In this paper, we describe the design, implementation and evaluation of Glaze, a tool that enables artists to apply "style cloaks" to their art before sharing online. These cloaks apply barely perceptible perturbations to images, and when used as t…
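
The abstract describes cloaks as barely perceptible perturbations added to an image before it is shared online. As a rough illustration of that general idea (not the authors' implementation), the sketch below optimizes a small perturbation so that a feature encoder reads the cloaked image as closer to a style-transferred target. The `encoder`, the pre-computed `style_target`, and the simple L-infinity budget are stand-in assumptions; the paper itself targets a text-to-image model's feature extractor and constrains the cloak perceptually.

```python
# Minimal sketch of a feature-space "style cloak"; NOT the authors' implementation.
# Assumptions: `encoder` stands in for a text-to-image model's image feature
# extractor, `style_target` is a hypothetical pre-computed style-transferred copy
# of the artwork, and an L-inf budget replaces the paper's perceptual constraint.
import torch
import torch.nn as nn

def compute_cloak(artwork: torch.Tensor,
                  style_target: torch.Tensor,
                  encoder: nn.Module,
                  eps: float = 0.05,
                  steps: int = 200,
                  lr: float = 0.01) -> torch.Tensor:
    """Return a small perturbation delta so that encoder(artwork + delta)
    moves toward encoder(style_target) while staying inside an L-inf ball."""
    delta = torch.zeros_like(artwork, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = encoder(style_target)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(encoder(artwork + delta), target_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                                  # keep the cloak small
            delta.copy_((artwork + delta).clamp(0, 1) - artwork)     # keep pixels valid
    return delta.detach()

# Toy usage with a stand-in convolutional encoder (purely illustrative).
if __name__ == "__main__":
    encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.Flatten())
    artwork = torch.rand(1, 3, 64, 64)
    style_target = torch.rand(1, 3, 64, 64)
    cloaked = (artwork + compute_cloak(artwork, style_target, encoder)).clamp(0, 1)
```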

Cited by 12 publications (13 citation statements) | References 42 publications
“…Finally, the ability to learn artistic styles may be misused for copyright infringement. However, recent work [Shan et al 2023] has shown that it is possible to protect artwork from being copied by text-to-image generators, and we hope that future research in this direction could serve to mitigate such risks of infringement.…”
Section: Discussion (mentioning)
confidence: 99%
“…Evaluating the quality of text-to-image generation has been a challenging and ongoing research problem due to the subjective nature of image evaluation and the inherent gap between text and image modalities [49]. Specifically for evaluating the generation after fine-tuning, there are two aspects to consider: the model's ability to replicate target concepts and its controllability in modifying concepts using different textual prompts [18].…”
Section: Evaluation of Text-to-Image Generation (mentioning)
confidence: 99%
“…In this regard, both preventive and reactive measures need to be developed. For example, one preventive method [49] is to add toxic noises to the original images, designed to be visually negligible yet sufficient to mislead generative models with a substantial divergence in the semantic interpretation of the content. Regarding reactive measures, researchers are expected to develop more robust models to dissect the distribution differences between generative and real data, enabling discrimination and effective data governance [8,14].…”
Section: Customized AIGC: Ethical Risks and Responsive Strategies (mentioning)
confidence: 99%
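
The preventive measure quoted above (noise that is visually negligible yet shifts a generative model's reading of the image) can be sketched generically as an untargeted counterpart of the cloak shown earlier. This is not the cited works' algorithm; `encoder` is again a hypothetical stand-in, and the L-infinity ball stands in for whatever perceptual constraint a real tool would enforce.

```python
# Generic sketch of "visually negligible but semantically disruptive" noise;
# not the cited papers' method. `encoder` is a hypothetical stand-in for a
# generative model's image encoder (e.g., the toy encoder defined above).
import torch
import torch.nn as nn

def untargeted_noise(image: torch.Tensor, encoder: nn.Module,
                     eps: float = 0.03, steps: int = 100,
                     step_size: float = 0.005) -> torch.Tensor:
    """PGD-style loop: push encoder(image + delta) AWAY from encoder(image)
    while keeping delta inside a small L-inf ball (visually negligible)."""
    with torch.no_grad():
        clean_feat = encoder(image)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        divergence = torch.nn.functional.mse_loss(encoder(image + delta), clean_feat)
        grad, = torch.autograd.grad(divergence, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()                      # ascend: maximize divergence
            delta.clamp_(-eps, eps)                               # project back into the L-inf ball
            delta.copy_((image + delta).clamp(0, 1) - image)      # keep pixels valid
    return delta.detach()
```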
“…Current methods can synthesize high-quality images with remarkable generalization ability, capable of composing different instances, styles, and concepts in unseen contexts. However, as these models are often trained on copyrighted images, they learn to mimic various artists' styles [64,61] and other copyrighted content [10].…”
Section: Related Work (mentioning)
confidence: 99%