2016
DOI: 10.48550/arxiv.1611.08207
Preprint

Texture Synthesis with Spatial Generative Adversarial Networks

Abstract: Generative adversarial networks (GANs) [7] are a recent approach to train generative models of data, which have been shown to work particularly well on image data. In the current paper we introduce a new model for texture synthesis based on GAN learning. By extending the input noise distribution space from a single vector to a whole spatial tensor, we create an architecture with properties well suited to the task of texture synthesis, which we call spatial GAN (SGAN). To our knowledge, this is the first successful…
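
The architectural change the abstract describes is compact enough to sketch directly. Below is a minimal, hypothetical PyTorch sketch of the idea (an illustration under stated assumptions, not the authors' released code): the generator is fully convolutional and consumes a spatial noise tensor of shape (batch, channels, h, w) rather than a single vector, so the output image size scales with the spatial size of the noise. All layer counts and channel widths here are illustrative.

    import torch
    import torch.nn as nn

    class SpatialGenerator(nn.Module):
        """Fully convolutional generator: a spatial grid of noise vectors
        in, an image out. Output size scales with the noise grid size."""
        def __init__(self, noise_channels=64):
            super().__init__()
            self.net = nn.Sequential(
                # each transposed convolution doubles the spatial resolution
                nn.ConvTranspose2d(noise_channels, 256, 4, stride=2, padding=1),
                nn.BatchNorm2d(256), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
                nn.BatchNorm2d(128), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1),
                nn.Tanh(),  # RGB output in [-1, 1]
            )

        def forward(self, z):
            # z: (batch, noise_channels, h, w), a spatial tensor of noise
            return self.net(z)

    g = SpatialGenerator()
    g(torch.randn(1, 64, 4, 4)).shape    # -> (1, 3, 32, 32)
    g(torch.randn(1, 64, 16, 16)).shape  # -> (1, 3, 128, 128)

Because no layer depends on absolute position, the same trained generator can synthesize arbitrarily large textures simply by sampling a larger noise grid, which is the property several of the citing works below build on.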

Cited by 54 publications (93 citation statements)
References 7 publications

“…Generative Adversarial Networks (GANs), which have shown remarkable capabilities in image generation in multiple domains [49], [50], [51], can learn those features from data. Specifically, in texture synthesis, they have proven successful at generating new samples of textures from a single input image [12] or from a dataset of images [14], [15], [16], [17], [18]. We build upon the method of Zhou et al. [12], which shows good performance on the synthesis of non-stationary single-image textures, and extend it to synthesize texture stacks as well as generate tileable outputs.…”
Section: Texture Synthesis
confidence: 99%
“…For this reason, such a process is frequently done manually by artists in 3D digitization pipelines. Recent advances in Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) have been applied to texture synthesis problems [12], [14], [15], [16], [17], [18], [19], showing unprecedented levels of realism and quality; however, the output of these methods is not tileable. Despite recent methods [13], [16], [20], [21], [22], [23] addressing the problem of tileable texture synthesis, we show that they either assume a particular level of regularity or the generated textures lose a significant amount of visual fidelity with respect to the input exemplars.…”
Section: Introduction
confidence: 99%
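
The citing statement above notes that the output of these synthesis methods is not tileable. One well-known remedy, sketched here as an illustrative assumption (not necessarily the technique used by any of the cited papers), is to give the generator's convolutions wrap-around ("circular") padding, so feature maps are computed on a torus and the output wraps seamlessly at its borders:

    import torch
    import torch.nn as nn

    # a convolution whose padding wraps around, as if the image were a torus
    tileable_conv = nn.Conv2d(64, 64, kernel_size=3, padding=1,
                              padding_mode="circular")

    x = torch.randn(1, 64, 32, 32)
    y = tileable_conv(x)
    # circular padding makes the layer equivariant to circular shifts,
    # which is exactly the property that makes tiled copies line up
    assert torch.allclose(tileable_conv(torch.roll(x, 8, dims=-1)),
                          torch.roll(y, 8, dims=-1), atol=1e-5)
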
“…Infinite image generation. Existing works on infinite image generation mainly consider the generation of only texture-like and pattern-like images [27,5,14,39,55], making it similar to procedural generation [50,47]. SinGAN [56] learns a GAN model from a single image and is able to produce its (potentially unbounded) variations.…”
Section: Related Work
confidence: 99%
“…Most of the prior work on infinite image generation focused on the synthesis of homogeneous texture-like patterns [27,5,39] and did not explore the infinite generation of complex scenes, like nature or city landscapes. The critical challenge of generating such images compared to texture synthesis is making the produced frames globally consistent with one another: when a scene spans across several frames, they should all be conditioned on some shared information.…”
Section: Introduction
confidence: 99%
“…Learning Unconditional GANs from a Single Image. There are several works on learning GANs from a single texture image [2,22,34]. Recently, the SinGAN approach [46] has shown surprisingly good results on learning unconditional GANs from a single non-texture image.…”
Section: Related Work and Our Contributions
confidence: 99%