2019
DOI: 10.1145/3306346.3322993

TileGAN

Fig. 1. TileGAN can synthesize large-scale textures with rich details. We show aerial images at different levels of detail generated using our framework, which allows for interactive texture editing. Our results contain a broad diversity of features at multiple scales and can be several hundreds of megapixels in size.

Abstract: We tackle the problem of texture synthesis in the setting where many input images are given and a large-scale output is required. We build on recent generative adversarial networks and propose two…


Cited by 54 publications (15 citation statements)
References 40 publications
“…Since their introduction in 2014, Generative Adversarial Networks (GANs) [14] have been successfully applied to various image synthesis tasks, e.g. image inpainting [48,11], image manipulation [52,5,1] and texture synthesis [29,43,12]. With continuous improvements on GAN architecture [40,25,38], loss function [33,4] and regularization [16,36,34], the images synthesized by GANs are becoming more and more realistic.…”
Section: Related Work (mentioning)
confidence: 99%
“…Built on the understanding of how an object is "drawn" by the generator, they further designed a semantic image editing system that can add, remove or change the appearance of an object in an input image [3]. Concurrently, Frühstück et al. [11] investigated the potential of activation tensor manipulation in image blending. Observing that boundary artifacts can be eliminated by cropping and combining activation tensors at early layers of a generator, they proposed an algorithm to create large-scale texture maps of hundreds of megapixels by combining outputs of GANs trained on a lower resolution.…”
Section: Related Work (mentioning)
confidence: 99%
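The citation statement above describes cropping and merging intermediate activation tensors so that one decoding pass produces a larger, seamless texture. Below is a minimal sketch of that idea, assuming a generator that can be split into an early stage (latent to spatial activations) and a late stage (activations to RGB); the module definitions, shapes, crop size, and the `synthesize_tiled` helper are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two halves of a pretrained generator.
early = nn.Sequential(nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.ReLU())
late = nn.Sequential(nn.ConvTranspose2d(256, 3, 4, 2, 1), nn.Tanh())

def synthesize_tiled(latents, grid=(2, 2), crop=4):
    """Run several latents through the early layers, keep only the central
    crop of each activation tensor (to drop border artifacts), tile the
    crops into one large tensor, then decode it with the late layers."""
    acts = [early(z) for z in latents]                    # each (1, C, H, W)
    cropped = [a[:, :, crop:-crop, crop:-crop] for a in acts]
    rows = [torch.cat(cropped[r * grid[1]:(r + 1) * grid[1]], dim=3)  # along width
            for r in range(grid[0])]
    canvas = torch.cat(rows, dim=2)                       # stack rows along height
    return late(canvas)                                   # one large output image

# Usage: four random latent maps arranged in a 2x2 grid.
zs = [torch.randn(1, 512, 8, 8) for _ in range(4)]
with torch.no_grad():
    big = synthesize_tiled(zs)
print(big.shape)  # merged texture, e.g. torch.Size([1, 3, 32, 32]) for this toy setup
```

The key design point is that cropping happens on low-resolution activations, so the later layers synthesize the seams between tiles rather than stitching finished pixels.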
“…Due to the progressive architecture of StyleGAN, one can perform meaningful tensor operations at different layers of the network [11,4]. We consider the following editing operations: spatial copying, averaging, and channelwise copying.…”
Section: Activation Tensor Manipulations (mentioning)
confidence: 99%
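A minimal sketch of the three editing operations named in the statement above (spatial copying, averaging, and channel-wise copying), applied to intermediate activation tensors of shape (N, C, H, W). The tensors, patch coordinates, blending weight, and channel subset here are arbitrary placeholders, not values from the cited work.

```python
import torch

a = torch.randn(1, 256, 16, 16)   # activations from source A
b = torch.randn(1, 256, 16, 16)   # activations from source B

# Spatial copying: paste a spatial patch of A's activations into B's.
edited = b.clone()
edited[:, :, 4:12, 4:12] = a[:, :, 4:12, 4:12]

# Averaging: blend the two activation tensors with a weight alpha.
alpha = 0.5
averaged = alpha * a + (1.0 - alpha) * b

# Channel-wise copying: replace a subset of B's feature channels with A's.
channels = torch.arange(0, 64)    # first 64 channels, chosen arbitrarily
chan_edit = b.clone()
chan_edit[:, channels] = a[:, channels]
```

Because the edits are made to intermediate activations rather than pixels, running the remaining generator layers on the edited tensor produces a coherent image rather than a hard composite.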
“…More recently, various learning-based methods have been proposed, including blending deep features instead of pixels [22,32,81] or designing loss functions based on deep features [95,96]. Generative Adversarial Networks (GAN) have also been used for image blending [10,23,47,77,88,101,105].…”
Section: Related Work (mentioning)
confidence: 99%