SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations
Preprint, 2021
DOI: 10.48550/arxiv.2108.01073

Cited by 74 publications (112 citation statements)
References: 49 publications
“…The generated samples are realistic and diverse, while the conditioning in the stroke paintings is faithfully preserved. Compared to Meng et al. (2021b), our model enjoys a 1100× speedup in generation, as it takes only 0.16s to generate one image at 256 resolution vs. 181s for Meng et al. (2021b). This experiment confirms that our proposed model enables the application of diffusion models to interactive applications such as image editing.…”
Section: Additional Studies (supporting)
confidence: 67%
“…Stroke-based image synthesis: Recently, Meng et al. (2021b) propose an interesting application of diffusion models to stroke-based generation. Specifically, they perturb a stroke painting by the forward diffusion process and denoise it with a diffusion model.…”
Section: Additional Studies (mentioning)
confidence: 99%
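
The excerpt above captures the core SDEdit procedure: perturb a guide image (e.g. a stroke painting) partway along the forward diffusion process, then run the learned reverse process to denoise it into a realistic image. Below is a minimal sketch of that perturb-then-denoise loop in PyTorch, assuming hypothetical model (a noise predictor) and scheduler (DDPM-style, with an alphas_cumprod table and a one-step step() update) interfaces; it illustrates the idea rather than reproducing the authors' implementation.

import torch

@torch.no_grad()
def sdedit(model, scheduler, guide, t0=0.5, num_steps=1000):
    # 1) Forward perturbation: noise the guide up to intermediate step
    #    t0 * num_steps, using the DDPM closed form
    #    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
    start = int(t0 * num_steps)
    a_bar = scheduler.alphas_cumprod[start - 1]
    x = a_bar.sqrt() * guide + (1.0 - a_bar).sqrt() * torch.randn_like(guide)

    # 2) Reverse denoising from the intermediate step back to 0.
    for t in reversed(range(start)):
        eps = model(x, t)              # predicted noise at step t
        x = scheduler.step(eps, t, x)  # one reverse-diffusion update
    return x

The single knob t0 trades faithfulness against realism: a larger t0 destroys more of the stroke input and gives the reverse process more freedom, which is the trade-off SDEdit studies.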
“…The model can even match styles when editing objects into paintings. We also experiment with SDEdit (Meng et al., 2021) in Figure 4, finding that our model is capable of turning sketches into realistic image edits. In Figure 3 we show how we can use GLIDE iteratively to produce a complex scene using a zero-shot generation followed by a series of inpainting edits.…”
Section: Qualitative Results (mentioning)
confidence: 99%
“…Most previous work that uses diffusion models for inpainting has not trained diffusion models explicitly for this task (Sohl-Dickstein et al., 2015; Meng et al., 2021). In particular, diffusion model inpainting can be performed by sampling from the diffusion model as usual, but replacing the known region of the image with a sample from q(x_t | x_0) after each sampling step.…”
Section: Image Inpainting (mentioning)
confidence: 99%
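
The replacement trick described in this excerpt is straightforward to write down: sample from the diffusion model as usual, but after each reverse step overwrite the known region with the reference image noised to the sampler's current level via q(x_t | x_0). A minimal sketch follows, under the same hypothetical model/scheduler interfaces as above; the mask convention is illustrative, not taken from the cited papers.

import torch

@torch.no_grad()
def inpaint(model, scheduler, x0_known, mask, num_steps=1000):
    # mask == 1 marks known pixels of x0_known; mask == 0 is filled in.
    x = torch.randn_like(x0_known)           # start from pure noise
    for t in reversed(range(num_steps)):
        eps = model(x, t)
        x = scheduler.step(eps, t, x)        # ordinary reverse step

        # Overwrite the known region with a sample from q(x_{t-1} | x_0):
        # the known pixels re-noised to the level the sampler is now at.
        if t > 0:
            a_bar = scheduler.alphas_cumprod[t - 1]
            noised = a_bar.sqrt() * x0_known + (1.0 - a_bar).sqrt() * torch.randn_like(x0_known)
        else:
            noised = x0_known                # final step: clean known pixels
        x = mask * noised + (1.0 - mask) * x
    return x

The excerpt's point is that this procedure requires no inpainting-specific training, which is also its weakness: the model never sees the replaced region during training, so the boundary between known and generated pixels can come out inconsistent.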
“…Interestingly, diffusion models can go beyond unconditional image synthesis and have been applied to conditional image generation, including super-resolution [5,17,25], inpainting [30,33], MRI reconstruction [6,13,32], image translation [5,19,27], and so on. One line of work redesigns the diffusion model to be specifically suitable for the task at hand, thereby achieving remarkable performance on the given task [17,25,27].…”
Section: Introduction (mentioning)
confidence: 99%