2019
DOI: 10.1007/978-3-030-11021-5_6

The Unreasonable Effectiveness of Texture Transfer for Single Image Super-Resolution

Abstract: While implicit generative models such as GANs have shown impressive results in high-quality image reconstruction and manipulation using a combination of various losses, we consider a simpler approach leading to surprisingly strong results. We show that texture loss [1] alone allows the generation of perceptually high-quality images. We provide a better understanding of the texture constraining mechanism and develop a novel semantically guided texture constraining method for further improvement. Using a recently dev…
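The texture loss referenced in the abstract is commonly implemented as a Gram-matrix loss over pretrained VGG features, in the style of Gatys et al. Below is a minimal sketch under that assumption; the layer indices, layer weights, and use of torchvision's VGG-19 are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of a Gram-matrix texture loss, assuming torchvision's VGG-19
# features; the chosen layers and equal weighting are assumptions, not the
# paper's reported setup.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

features = vgg19(pretrained=True).features.eval()
for p in features.parameters():
    p.requires_grad_(False)

TEXTURE_LAYERS = {3, 8, 17, 26}  # relu1_2, relu2_2, relu3_4, relu4_4 (assumed)

def gram(feat):
    # Gram matrix of feature maps: channel-wise correlations, normalized
    # by the number of elements.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_loss(sr, hr):
    # Compare Gram matrices of the generated (sr) and reference (hr) images
    # at several VGG layers and sum the mean-squared differences.
    loss, x, y = 0.0, sr, hr
    for i, layer in enumerate(features):
        x, y = layer(x), layer(y)
        if i in TEXTURE_LAYERS:
            loss = loss + F.mse_loss(gram(x), gram(y))
    return loss
```

Because the Gram matrix discards spatial arrangement, this loss constrains local texture statistics rather than pixel positions, which is why it can be applied alone for perceptual super-resolution.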
Cited by 27 publications (20 citation statements). References 59 publications (132 reference statements).

“…For the proposed SNN model, we split the given dataset into train and test sets using the common 80:20 ratio. To test the logic, 20% of the data is given as input to the network, and the missing pixels of the image are inpainted based on the training data [20][21][22].…”
Section: Results and Analysis
confidence: 99%
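The 80:20 split described in this statement is a standard hold-out split. A minimal sketch, assuming scikit-learn's `train_test_split` and a placeholder dataset rather than the cited study's data:

```python
# Illustrative 80:20 train/test split; `images` is a placeholder for the
# actual image dataset used in the cited work.
import numpy as np
from sklearn.model_selection import train_test_split

images = np.arange(100)  # placeholder samples
train_imgs, test_imgs = train_test_split(images, test_size=0.2, random_state=0)
print(len(train_imgs), len(test_imgs))  # 80, 20
```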
“…The higher the PSNR and SSIM, the less noise the image has. In addition, this article also refers to learned perceptual image patch similarity (LPIPS) [45,46], which we describe as Perceptual Similarity. In this paper, CNN features are used to represent the visual perception of images.…”
Section: Evaluation Methods
confidence: 99%
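The three metrics named here (PSNR, SSIM, LPIPS) can be computed as in the sketch below, which assumes scikit-image and the `lpips` package; this is illustrative and not the cited article's evaluation code.

```python
# Sketch of PSNR, SSIM, and LPIPS evaluation for a reference/output image pair.
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

loss_fn = lpips.LPIPS(net='alex')  # LPIPS distance over AlexNet features

def evaluate(ref, out):
    # ref, out: HxWx3 uint8 arrays; higher PSNR/SSIM and lower LPIPS are better.
    psnr = peak_signal_noise_ratio(ref, out, data_range=255)
    ssim = structural_similarity(ref, out, channel_axis=-1, data_range=255)

    # LPIPS compares deep CNN features; inputs are NCHW tensors scaled to [-1, 1].
    to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    dist = loss_fn(to_tensor(ref), to_tensor(out)).item()
    return psnr, ssim, dist
```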
“…The neural networks of the generative model and the discriminative model used in this study are similar to those of EdgeConnect. In detail, the generative model follows a super-resolution architecture [21,22], and the discriminative model follows a 70 × 70 PatchGAN architecture [23,24].…”
Section: Training GAN
confidence: 99%
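The 70 × 70 PatchGAN mentioned in this statement is the pix2pix-style discriminator that classifies overlapping image patches rather than whole images. A minimal PyTorch sketch under that assumption; the C64-C128-C256-C512 widths and instance normalization follow the common recipe and are not necessarily the cited study's exact model.

```python
# Minimal sketch of a 70x70 PatchGAN discriminator (pix2pix-style).
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        def block(cin, cout, stride, norm=True):
            layers = [nn.Conv2d(cin, cout, 4, stride, 1)]
            if norm:
                layers.append(nn.InstanceNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.net = nn.Sequential(
            *block(in_ch, 64, 2, norm=False),
            *block(64, 128, 2),
            *block(128, 256, 2),
            *block(256, 512, 1),
            nn.Conv2d(512, 1, 4, 1, 1),  # per-patch real/fake score map
        )

    def forward(self, x):
        return self.net(x)

# Each spatial location in the output map corresponds to roughly a 70x70
# patch of the input image.
scores = PatchDiscriminator()(torch.randn(1, 3, 256, 256))
print(scores.shape)  # torch.Size([1, 1, 30, 30])
```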