2023
DOI: 10.48550/arxiv.2303.13516
Preprint
Ablating Concepts in Text-to-Image Diffusion Models

Cited by 2 publications (5 citation statements)
References 51 publications
“…2. In Ablation (Kumari et al. 2023), the erasing concept is mapped to a broader "anchor" concept. While their loss is effective, its effect leaks to nearby concepts, similar to DreamBooth (Ruiz et al. 2023).…”
Section: Related Work
confidence: 99%
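For context, a minimal sketch of the anchor-mapping objective this statement describes: the trainable model's noise prediction for the target-concept prompt is pulled toward a frozen copy's prediction for the broader anchor prompt. `unet`, `frozen_unet`, and `encode` are hypothetical stand-ins for a diffusion UNet and text encoder, not the authors' code.

```python
import torch
import torch.nn.functional as F

def ablation_loss(unet, frozen_unet, encode, x_t, t, target_prompt, anchor_prompt):
    """One training step's loss for mapping a target concept onto an anchor."""
    c_target = encode(target_prompt)   # e.g. "a photo of Grumpy Cat"
    c_anchor = encode(anchor_prompt)   # broader anchor, e.g. "a photo of a cat"
    with torch.no_grad():
        # The frozen model, conditioned on the anchor, defines the target output.
        eps_anchor = frozen_unet(x_t, t, c_anchor)
    # The trainable model is conditioned on the concept being erased.
    eps_target = unet(x_t, t, c_target)
    # Matching the two predictions makes target-concept prompts generate the
    # anchor concept instead; as noted above, this can leak to nearby concepts.
    return F.mse_loss(eps_target, eps_anchor)
```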
“…Baselines. We compare the performance of our method with four recent concept-erasing fine-tuning methods: ESD, SDD, "Ablating" (Kumari et al. 2023), and Forget-Me-Not. Given the practical utility of a model censored for sexual content, our experiments center on erasing "nudity".…”
Section: Experiments Settings
confidence: 99%
“…At the same time, methods based on machine unlearning have also shown promising results. Kumari et al. (2023) train the hidden states of sentences containing a specified concept to be closer to those of sentences without it, which removes the model's ability to generate that concept.…”
Section: Related Work
confidence: 99%
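A minimal sketch of the hidden-state reading in this statement: representations of concept-bearing prompts are pulled toward those of matched concept-free prompts. `text_encoder` and `tokenize` are hypothetical stand-ins; this illustrates the quoted description, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def hidden_state_loss(text_encoder, tokenize, prompt_with, prompt_without):
    """Align hidden states of a concept prompt with a concept-free prompt."""
    h_with = text_encoder(tokenize(prompt_with))         # contains the concept
    with torch.no_grad():
        # The concept-free prompt's hidden states serve as the fixed target.
        h_without = text_encoder(tokenize(prompt_without))
    # Once the two match, the concept prompt conditions the diffusion model
    # the same way the concept-free prompt does, erasing the concept.
    return F.mse_loss(h_with, h_without)
```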
“…However, this method relies on a filtering model pre-trained on specific concepts, which makes it challenging to detect open-vocabulary visual concepts. Finally, the third approach, refining models (Gandikota et al. 2023; Kumari et al. 2023), fine-tunes all or part of the model to satisfy the administrator's requirements, so that it adheres to the desired guidelines and produces content aligned with the established rules and policies. However, these methods are often limited by biases in the tuning data, making it difficult to achieve open-vocabulary capabilities.…”
Section: Introduction
confidence: 99%