2022
DOI: 10.48550/arxiv.2201.11279
Preprint

Revisiting RCAN: Improved Training for Image Super-Resolution

Abstract: Image super-resolution (SR) is a fast-moving field with novel architectures attracting the spotlight. However, most SR models were optimized with dated training strategies. In this work, we revisit the popular RCAN model and examine the effect of different training options in SR. Surprisingly (or perhaps as expected), we show that RCAN can outperform or match nearly all the CNN-based SR architectures published after RCAN on standard benchmarks with a proper training strategy and minimal architecture change. Be…
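To give a concrete sense of what "revisiting the training strategy" of an SR model can look like in practice, the sketch below wires a generic super-resolution network into a modernized PyTorch training loop (L1 loss, Adam, cosine-annealing schedule, mixed precision). The specific choices here are illustrative assumptions, not the paper's confirmed recipe.

# Minimal sketch of a modernized SR training loop in PyTorch.
# Concrete settings (epochs, LR, AMP) are illustrative assumptions only.
import torch
from torch import nn, optim
from torch.cuda.amp import autocast, GradScaler

def train_sr(model: nn.Module, loader, epochs: int = 300, lr: float = 2e-4):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    criterion = nn.L1Loss()                       # L1 is the usual SR reconstruction loss
    optimizer = optim.Adam(model.parameters(), lr=lr)
    scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    scaler = GradScaler(enabled=(device == "cuda"))

    for _ in range(epochs):
        for lr_patch, hr_patch in loader:         # paired low-res / high-res crops
            lr_patch, hr_patch = lr_patch.to(device), hr_patch.to(device)
            optimizer.zero_grad(set_to_none=True)
            with autocast(enabled=(device == "cuda")):
                sr = model(lr_patch)
                loss = criterion(sr, hr_patch)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
        scheduler.step()
    return model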

Cited by 15 publications (18 citation statements)
References 32 publications

“…Tab. 6 shows the quantitative comparison of our approach and the state-of-the-art methods: EDSR [30], RCAN [65], SAN [7], IGNN [69], HAN [41], NLSN [40], RCAN-it [32], as well as approaches using ImageNet pre-training, i.e., IPT [5] and EDT [26]. We can see that our approach outperforms the other methods significantly on all benchmark datasets.…”
Section: Comparison With State-of-the-art Methods (mentioning)
confidence: 96%
“…We use the DF2K dataset (DIV2K [31] + Flickr2K [47]) as the original training dataset, following the latest publications [29,32]. When utilizing pre-training, we adopt ImageNet [8] following [5,26].…”
Section: Methods (mentioning)
confidence: 99%
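The DF2K setup mentioned in the excerpt above is the union of the DIV2K and Flickr2K high-resolution image sets. As a hedged illustration, the following PyTorch sketch assembles such a combined training set with ConcatDataset; the directory paths and the HRImageFolder helper are hypothetical, and a real SR pipeline would additionally generate paired LR/HR crops.

# Sketch: assembling a DF2K-style training set (DIV2K + Flickr2K) by
# concatenating two HR image folders. Paths are assumed for illustration.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, ConcatDataset

class HRImageFolder(Dataset):
    def __init__(self, root: str, transform=None):
        self.paths = sorted(Path(root).glob("*.png"))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(img) if self.transform else img

# DF2K = DIV2K (800 HR training images) + Flickr2K (2650 HR images)
df2k = ConcatDataset([
    HRImageFolder("data/DIV2K/DIV2K_train_HR"),   # assumed local path
    HRImageFolder("data/Flickr2K/Flickr2K_HR"),   # assumed local path
])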
“…To be specific, since the feature-extraction and upsampling modules are far weaker than the feature-propagation module, they increase the number of residual blocks in these two modules. In addition, enhanced activation functions (e.g., SiLU [17]) have proven effective according to [40]. For training efficacy, they replace the Leaky ReLU in BasicVSR++ with PReLU [22] and verify its effectiveness.…”
Section: Gy-lab Team (mentioning)
confidence: 99%
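The activation swap described in the excerpt above (Leaky ReLU replaced by PReLU, or SiLU as suggested by [40]) is a one-line change per block when the activation is injected as an argument. The sketch below shows this on a generic residual block; it is an illustrative module, not the actual BasicVSR++ block.

# Sketch: a generic residual block whose activation can be swapped
# (LeakyReLU -> PReLU or SiLU). Illustrative only, not BasicVSR++'s module.
from typing import Optional
import torch
from torch import nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64, act: Optional[nn.Module] = None):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        # Default to LeakyReLU; pass nn.PReLU() or nn.SiLU() to swap it out.
        self.act = act if act is not None else nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.conv2(self.act(self.conv1(x)))

# Swapping the activation as described in the citation:
block_prelu = ResidualBlock(act=nn.PReLU(num_parameters=64))
block_silu = ResidualBlock(act=nn.SiLU(inplace=True))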
“…Figure 3 shows the pipeline of the proposed method. Inspired by [40], they expect an enlarged model combined with proper training strategies to yield noticeable improvements over the baseline. On the one hand, they perform two modifications on BasicVSR++ to improve its capacity.…”
Section: Gy-lab Team (mentioning)
confidence: 99%