2021
DOI: 10.48550/arxiv.2107.10833
Preprint
Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

Abstract: https://github.com/xinntao/Real-ESRGAN

Figure 1 (panels: Bicubic, ESRGAN, RealSR, Real-ESRGAN): Comparisons of bicubic-upsampled, ESRGAN [44], RealSR [17], and our Real-ESRGAN results on real-life images. The Real-ESRGAN model trained with pure synthetic data is capable of enhancing details while removing annoying artifacts for common real-world images.
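The "pure synthetic data" in the title refers to generating low-quality training inputs by repeatedly degrading clean images (the paper's high-order degradation model chains blur, resizing, noise, and JPEG compression). A minimal sketch of that idea, assuming a toy grayscale image as a list of lists — the helper names and the reduced blur/resize/noise chain are illustrative, not the authors' implementation (which also includes JPEG compression and sinc filters):

```python
import random

def box_blur(img):
    """3x3 box blur on a 2D grayscale image (list of lists); border pixels
    average over the in-bounds neighbours only."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def downsample(img, scale=2):
    """Nearest-neighbour downsampling by an integer factor."""
    return [row[::scale] for row in img[::scale]]

def add_noise(img, sigma=5.0, rng=None):
    """Additive Gaussian noise, clipped to the [0, 255] pixel range."""
    rng = rng or random.Random(0)
    return [[min(255.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in row]
            for row in img]

def degrade(hq, orders=2):
    """Apply the blur -> downsample -> noise chain `orders` times,
    mimicking the paper's idea of a *high-order* (repeated) degradation
    process that turns a clean image into a realistic low-quality input."""
    lq = hq
    for _ in range(orders):
        lq = add_noise(downsample(box_blur(lq)))
    return lq
```

Training pairs are then (degrade(hq), hq), so no real-world low/high-quality pairs are ever needed.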

Cited by 23 publications (31 citation statements)
References 55 publications
“…To test the performance of SwinIR for realworld SR, we re-train SwinIR by using the same degradation model as BSRGAN for low-quality image synthesis. Since there is no ground-truth high-quality images, we only provide visual comparison with representative bicubic model ESRGAN [81] and state-of-the-art realworld image SR models RealSR [37], BSRGAN [89] and Real-ESRGAN [80]. As shown in Fig.…”
Section: Results on Image SR
confidence: 99%
“…For classical and lightweight image SR, we only use the naive L 1 pixel loss as same as previous work to show the effectiveness of the proposed network. For real-world image SR, we use a combination of pixel loss, GAN loss and perceptual loss [81,89,80,27,39,81] to improve visual quality.…”
Section: Methods
confidence: 99%
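The statement above describes the standard real-world-SR training objective: a weighted sum of a pixel-wise L1 term, a perceptual term, and an adversarial (GAN) term. A minimal sketch of that combination, assuming the commonly used 1 : 1 : 0.1 weighting — the function names and weights here are illustrative placeholders, so check each paper's training configuration for the exact values:

```python
def l1_loss(pred, target):
    """Mean absolute error between two flat lists of pixel values
    (the 'naive L1 pixel loss' used alone for classical SR)."""
    assert len(pred) == len(target)
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def total_loss(pix, percep, gan, w_pix=1.0, w_percep=1.0, w_gan=0.1):
    """Weighted sum of the three generator loss terms used for
    real-world SR. `percep` would come from deep-feature distances and
    `gan` from a discriminator; both are passed in as scalars here."""
    return w_pix * pix + w_percep * percep + w_gan * gan
```

Downweighting the GAN term keeps the adversarial signal from overpowering fidelity to the ground truth while still sharpening textures.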
“…However, the reconstruction accuracy of the above methods greatly depends on the accuracy of the degradation mode estimation. To address this issue, more implicit degradation modeling methods are proposed [35], [141], [142], which aim to implicitly learn the potential degradation modes by the external datasets.…”
Section: Blind SISR
confidence: 99%
“…IKC [14] and DAN [25] further propose to jointly perform degradation prediction and restoration in an iterative manner. Recent works also attempt to address blind SR within a one-branch networks [40,47]. There are few related works investigating the relationship between two-branch and one-branch networks for blind SR.…”
Section: Related Work
confidence: 99%