2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020
DOI: 10.1109/cvprw50498.2020.00241
Real-World Super-Resolution via Kernel Estimation and Noise Injection

Cited by 282 publications (278 citation statements) · References 37 publications
“…In order to ensure that the degraded images have a similar noise distribution as the source images, we extract the noise mapping patches directly from the source images in the training dataset. Due to the large variance of the patches with rich contents [38], and inspired by [40,45], when extracting noise mapping patches we control the variance within a specific range under the condition:…”
Section: Generation and Injection of Noise
confidence: 99%
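The excerpt above describes harvesting noise maps from low-variance (smooth) regions of the source images. A minimal sketch of that idea is below; the patch size, variance threshold, and stride are illustrative assumptions, not the values used in the cited work.

```python
import numpy as np

def extract_noise_patches(image, patch_size=64, var_max=20.0, stride=64):
    """Collect low-variance patches from a source image as noise maps.

    Smooth regions expose the sensor noise pattern, so patches whose
    variance stays below `var_max` are kept (after mean removal), while
    richly textured patches are discarded. All thresholds here are
    illustrative, not the ones from the cited paper.
    """
    patches = []
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size].astype(np.float64)
            if patch.var() < var_max:
                patches.append(patch - patch.mean())  # zero-mean noise map
    return patches
```

In practice the variance test is what separates content from noise: a flat wall patch passes, a detailed texture patch fails.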
“…To sum up, the process of generating LR images in ROI_LR from the source images in ROI_Src can be expressed as Equation (10), where i and j are randomly selected:

I_LR = (I_Src ∗ k_i) ↓_s + n_j    (10)

The SR generator is designed on the basis of the ESRGAN [26] model. Because the ESRGAN discriminator may introduce more artifacts [38], SR-D is designed on the basis of the PatchGAN [44] model. The perceptual feature extractor is designed on the basis of VGG-19 [46], so as to introduce the perceptual loss [47] to enhance the visual effect of low-frequency features of the images.…”
Section: Generation and Injection of Noise
confidence: 99%
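The degradation in Equation (10) — blur with a randomly chosen kernel k_i, downsample by the scale factor, then inject a randomly chosen noise map n_j — can be sketched as follows. The function and parameter names, the edge padding, and the direct subsampling are assumptions for illustration.

```python
import numpy as np

def degrade(src, kernels, noise_patches, scale=4, rng=None):
    """Sketch of Equation (10): LR = (src * k_i) ↓scale + n_j.

    k_i and n_j are drawn at random from the provided pools. The blur is
    a plain 'same'-padded 2-D convolution written out for clarity; the
    noise patch is assumed to be at least as large as the LR image.
    """
    rng = rng or np.random.default_rng()
    k = kernels[rng.integers(len(kernels))]          # random kernel k_i
    kh, kw = k.shape
    pad = ((kh // 2, kh - kh // 2 - 1), (kw // 2, kw - kw // 2 - 1))
    padded = np.pad(src, pad, mode="edge")
    blurred = np.zeros(src.shape, dtype=np.float64)
    for dy in range(kh):                             # direct 2-D convolution
        for dx in range(kw):
            blurred += k[dy, dx] * padded[dy:dy + src.shape[0],
                                          dx:dx + src.shape[1]]
    lr = blurred[::scale, ::scale]                   # ↓scale: subsampling
    n = noise_patches[rng.integers(len(noise_patches))]  # random noise map n_j
    return lr + n[:lr.shape[0], :lr.shape[1]]
```

Sampling i and j independently per training image is what gives the LR pool its realistic variety of blur and noise.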
“…2) For the perceptual track, the second stage was trained using a combined loss in which λ1 = 1e-3, λ2 = 1, and λ3 = 5e-3. They used a patch discriminator as in [18]. The rest of the hyperparameters are the same as in [49].…”
Section: Kirinuk
confidence: 99%
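The excerpt gives only the weights λ1 = 1e-3, λ2 = 1, λ3 = 5e-3; which term each weight multiplies is not stated. A minimal sketch of such a weighted combination is below, with the assignment of weights to pixel, perceptual, and adversarial terms an explicit assumption.

```python
def combined_loss(pixel_loss, perceptual_loss, adversarial_loss,
                  lam1=1e-3, lam2=1.0, lam3=5e-3):
    """Weighted sum of three loss terms.

    Only the weight values come from the excerpt; mapping lam1/lam2/lam3
    to pixel, perceptual, and adversarial losses is an assumption made
    here for illustration.
    """
    return lam1 * pixel_loss + lam2 * perceptual_loss + lam3 * adversarial_loss
```

With these weights the perceptual term dominates, which matches the track's goal of optimizing perceptual quality rather than pixel fidelity.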