2020
DOI: 10.1007/978-3-030-67070-2_24
AIM 2020 Challenge on Real Image Super-Resolution: Methods and Results

Abstract: This paper introduces the real image Super-Resolution (SR) challenge that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2020. This challenge involves three tracks to super-resolve an input image for ×2, ×3 and ×4 scaling factors, respectively. The goal is to attract more attention to realistic image degradation for the SR task, which is much more complicated and challenging, and contributes to real-world image super-resolution applications. 452 participants were r…

Cited by 34 publications (33 citation statements)
References 51 publications
“…Although many models have been put forward for blind SR, there is still a long way to go, since only a small set of real-world images has been tackled. Existing methods often claim to focus on real-world settings, but they actually assume a certain scene, such as images taken by particular digital cameras [17], [18]. In fact, real-world images differ greatly in their underlying degradation types, and an SR model designed for one specific type can easily fail on another.…”
Section: Arbitrary LR Input Domain Gap
confidence: 99%
“…This kind of approach aims to implicitly grasp the underlying degradation model by learning from an external dataset. For datasets with paired HR-LR images, supervised learning with careful design of the SR network may already be enough to achieve satisfactory results, as in the top solutions of the NTIRE 2018 [71] and AIM 2020 [18] challenges. A more difficult setting is learning with unpaired data, where ground truth for LR images with realistic degradations is unavailable.…”
Section: Implicit Degradation Modelling 7.1 Learning Data Distribution…
confidence: 99%
“…In VCBP, the iterative error correction happens in the low-resolution space, and in each iteration the reconstructed residual features are added to the HR encoded feature space. In VCBPv2 [21], the parameters are not shared within the modules, and the iterative error correction happens in both low- and high-resolution space. It follows the design of Haris et al. [19] in using iterative up- and downsampling layers to process the features in the Inner Loop.…”
Section: Related Work
confidence: 99%
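The iterative error-correction idea described above is essentially back-projection: re-degrade the current HR estimate, measure the residual in LR space, and project that residual back into HR space. The following is a minimal NumPy sketch of this loop, not the VCBP architecture itself; the nearest-neighbour `upsample`/`downsample` helpers stand in for the learned up/downsampling layers of the actual network.

```python
import numpy as np

def upsample(x, s=2):
    # Nearest-neighbour upsampling: repeat each pixel s times along each axis.
    return np.repeat(np.repeat(x, s, axis=0), s, axis=1)

def downsample(x, s=2):
    # Nearest-neighbour downsampling: keep the top-left pixel of each s x s block.
    return x[::s, ::s]

def iterative_back_projection(lr, n_iter=5, s=2):
    """Refine an HR estimate by repeatedly projecting the LR
    reconstruction error back into the HR space."""
    hr = upsample(lr, s)                   # initial HR estimate
    for _ in range(n_iter):
        lr_est = downsample(hr, s)         # re-degrade the current estimate
        residual = lr - lr_est             # error measured in LR space
        hr = hr + upsample(residual, s)    # back-project the error to HR space
    return hr
```

In a learned model such as DBPN or VCBP, the fixed resampling operators above are replaced by trainable up- and down-projection modules, and the loop is unrolled into network layers.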
“…Despite their success, the previously mentioned methods are trained on LR/HR image pairs generated with bicubic downsampling, and thus they have limited performance in real-world settings. Recently, in the real-world SR challenge series [18]-[20], the authors have described the effects of bicubic downsampling. More recently, in [1], [2], the authors proposed GAN-based SR methods to solve the real-world SR problem.…”
Section: Related Work
confidence: 99%
“…For training, we use the DIV2K [22], Flickr2K [23], and RealSR [24] datasets, which jointly contain 22,430 high-quality HR images with rich and diverse textures for image restoration tasks. We obtain the LR bicubic, LR bilinear, and LR nearest images by down-sampling the HR images of the DIV2K and Flickr2K datasets by a scaling factor of ×4 using the PyTorch bicubic, bilinear, and nearest kernel functions.…”
Section: A. Training Data
confidence: 99%
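A minimal sketch of how such LR training images can be generated with PyTorch's `torch.nn.functional.interpolate`, assuming images are held as `(N, C, H, W)` tensors; the helper name `make_lr` and the random stand-in tensor are illustrative, not from the cited work.

```python
import torch
import torch.nn.functional as F

def make_lr(hr, scale=4, mode="bicubic"):
    """Down-sample an HR image batch (N, C, H, W) by `scale`
    using one of PyTorch's built-in resampling kernels."""
    # align_corners is only accepted by the linear/cubic modes.
    kwargs = {} if mode == "nearest" else {"align_corners": False}
    return F.interpolate(hr, scale_factor=1.0 / scale, mode=mode, **kwargs)

hr = torch.rand(1, 3, 128, 128)          # stand-in for a DIV2K/Flickr2K crop
lr_bicubic = make_lr(hr, 4, "bicubic")   # 1 x 3 x 32 x 32
lr_bilinear = make_lr(hr, 4, "bilinear")
lr_nearest = make_lr(hr, 4, "nearest")
```

In practice one would iterate over the dataset files, crop patches, and save or yield the HR/LR pairs; the kernel choice (`bicubic`, `bilinear`, `nearest`) is what distinguishes the three LR variants described above.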