Fig. 1. TileGAN can synthesize large-scale textures with rich detail. We show aerial images at different levels of detail generated with our framework, which supports interactive texture editing. Our results contain a broad diversity of features at multiple scales and can span several hundred megapixels.

We tackle texture synthesis in the setting where many input images are given and a large-scale output is required. Building on recent generative adversarial networks, we propose two extensions. First, an algorithm that combines the outputs of GANs trained at a smaller resolution into a large-scale, plausible texture map with virtually no boundary artifacts. Second, a user interface that enables artistic control. Our quantitative and qualitative results demonstrate the synthesis of high-resolution texture maps of up to hundreds of megapixels.
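The core idea above, merging the outputs of a generator trained at a smaller resolution into one large map without visible seams, can be illustrated with a minimal NumPy sketch that feathers overlapping tiles. This is a simplified stand-in, not TileGAN's actual method (which merges latent fields inside the generator); the function name `blend_tiles` and its parameters are hypothetical.

```python
import numpy as np

def blend_tiles(tiles, tile_size, overlap, grid_shape):
    """Stitch a grid of square texture tiles into one large map,
    linearly feathering the overlapping borders to hide seams."""
    rows, cols = grid_shape
    step = tile_size - overlap
    out_h = step * (rows - 1) + tile_size
    out_w = step * (cols - 1) + tile_size
    canvas = np.zeros((out_h, out_w))
    weight = np.zeros((out_h, out_w))

    # Per-tile weight mask: ramps up across the overlap band,
    # flat (weight 1) in the tile interior.
    ramp = np.minimum(np.arange(tile_size) + 1, np.arange(tile_size)[::-1] + 1)
    ramp = np.minimum(ramp / overlap, 1.0)
    mask = np.outer(ramp, ramp)

    for r in range(rows):
        for c in range(cols):
            y, x = r * step, c * step
            canvas[y:y + tile_size, x:x + tile_size] += tiles[r * cols + c] * mask
            weight[y:y + tile_size, x:x + tile_size] += mask
    # Normalize by the accumulated weights so overlaps average smoothly.
    return canvas / weight
```

In the overlap band each output pixel is a weighted average of the adjacent tiles, so a constant-valued input grid reproduces the same constant everywhere, and mismatched tile borders fade into each other instead of producing a hard seam.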
Figure 1. InsetGAN applications. Our full-body human generator is able to generate reasonable bodies at state-of-the-art resolution (1024×1024px) (a). However, some artifacts appear in the synthesized results, most visibly in extremities and faces. We make use of a second, specialized generator to seamlessly improve the face region (b). We can also use a given face as an input for unconditional generation of bodies (c). Furthermore, we can select both specific faces and bodies and compose them in a seamlessly merged output (d).
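The two-generator composition described in the caption, where a specialized face generator improves the face region of a full-body result, ultimately requires blending an inset crop back into the base image without a visible border. InsetGAN achieves seamlessness by jointly optimizing the latent codes of both generators; the sketch below shows only a simplified pixel-space compositing step with a feathered alpha mask, and the names `inset_face` and `feather` are hypothetical.

```python
import numpy as np

def inset_face(body, face, box, feather):
    """Paste a refined face crop into a full-body image, alpha-blending
    a feathered border so the inset region transitions smoothly."""
    y, x = box
    h, w = face.shape[:2]
    # Alpha mask: 1 in the crop interior, ramping down to 0 at the border.
    ry = np.clip(np.minimum(np.arange(h), np.arange(h)[::-1]) / feather, 0, 1)
    rx = np.clip(np.minimum(np.arange(w), np.arange(w)[::-1]) / feather, 0, 1)
    alpha = np.outer(ry, rx)[..., None]
    out = body.astype(float).copy()
    region = out[y:y + h, x:x + w]
    # Blend: face dominates in the interior, body shows through at the edge.
    out[y:y + h, x:x + w] = alpha * face + (1 - alpha) * region
    return out
```

At the very edge of the crop the alpha weight is zero, so the surrounding body pixels are preserved exactly, while the crop interior is replaced by the refined face.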
Figure 1. We propose VIVE3D, a novel method that builds a powerful personalized 3D-aware generator from a small number of selected images of a target person. Given a new video of that person, we can faithfully modify several facial attributes as well as the camera viewpoint of the head crop. Finally, we seamlessly composite the edited face with the source frame in a temporally and spatially consistent manner, while keeping the result plausible with respect to the static parts of the frame outside the generator's region. The dotted squares in the center frame denote the reference regions for the three different camera poses in the column below.