2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2018.00125

Persistent Memory Residual Network for Single Image Super Resolution

Cited by 17 publications (8 citation statements, all "mentioning"; citing years 2019–2024); references 16 publications.

“…Batch normalization is a technique that performs well in computer vision tasks, reduces the overall training runtime, and enhances performance; however, in SR batch normalization has proved to be sub-optimal (Lim et al., 2017; Wang et al., 2018a; Chen et al., 2018a). In this regard, normalization techniques for super-resolution should be explored further.…”
Section: Discussion and Future Directions (mentioning, confidence: 99%)

“…Thus, there is a lack of flexibility in the network; hence, Lim et al. (2017) removed batch normalization and used the additional memory to design a large model with performance superior to the BN-based network. Other studies (Wang et al., 2019c; Wang et al., 2018a; Chen et al., 2018a) also implemented this technique to achieve marginally better performance.…”
Section: Supervised Super-resolution (mentioning, confidence: 99%)
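
To make the technique the two statements above describe concrete, here is a minimal PyTorch sketch of a residual block with the batch normalization layers removed, in the style of EDSR (Lim et al., 2017). The channel count and residual scaling factor are illustrative assumptions, not values taken from the cited papers.

import torch
import torch.nn as nn

class ResBlockNoBN(nn.Module):
    """Residual block with the BatchNorm layers removed (EDSR-style sketch)."""
    def __init__(self, channels: int = 64, res_scale: float = 0.1):
        super().__init__()
        # conv -> ReLU -> conv, with no BatchNorm in between
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale  # scaling stabilizes very deep stacks

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No BN: the feature statistics of the input pass through the
        # block unchanged, which matters for super-resolution.
        return x + self.body(x) * self.res_scale

Removing the two BN layers per block frees memory that can be reinvested in a wider or deeper network, which is the trade-off the statement above attributes to Lim et al. (2017).
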
“…In each group, models are further split into those trained with the L2 loss function and those trained with the L1 loss function. SR models in the ‘Bicubic’ group include SRCNN [28], VDSR [29], DRRN [54], MemNet [55], DnCNN [56], LapSRN [38], ZSSR [57], CARN [58], and SRRAM [59]. In the ‘Learned’ group, we compare the CAR model with two recent state-of-the-art image downscaling models trained jointly with deep SR models, i.e., the CNN-CR→CNN-SR [4] model and the TAD→TAU model [21].…”
Section: Comparison With Public Benchmark SR Results (mentioning, confidence: 99%)
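
For reference, a minimal PyTorch sketch contrasting the two training losses the comparison above splits models by; the tensor shapes are illustrative only.

import torch
import torch.nn as nn

sr = torch.rand(1, 3, 64, 64)   # stand-in for a network's super-resolved output
hr = torch.rand(1, 3, 64, 64)   # stand-in for the ground-truth HR image

l2_loss = nn.MSELoss()(sr, hr)  # L2: mean squared error, directly tied to PSNR
l1_loss = nn.L1Loss()(sr, hr)   # L1: mean absolute error, common in recent SR models
print(float(l2_loss), float(l1_loss))
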
“…In the generator, LR images are fed into the network, followed by one Conv layer that extracts shallow features. Four memory residual (MR) blocks are then applied to improve image quality; they form a persistent memory and improve the feature-selection ability of the model, as in MemEDSR [26]. Each MR block consists of four ResBlocks and a gate unit.…”
Section: MR-SRGAN (mentioning, confidence: 99%)
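
A hedged PyTorch sketch of the MR block described above: four ResBlocks feeding a gate unit. The gate is assumed here to be a MemNet-style 1x1 convolution over the concatenated intermediate outputs; the exact MR-SRGAN design may differ from this reconstruction.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Plain conv-ReLU-conv residual block (no batch norm)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

class MRBlock(nn.Module):
    """Memory residual block: n ResBlocks plus a gate unit (assumed design)."""
    def __init__(self, channels: int = 64, n_res: int = 4):
        super().__init__()
        self.res_blocks = nn.ModuleList(ResBlock(channels) for _ in range(n_res))
        # Gate unit: fuses the block input and every ResBlock output,
        # forming the "persistent memory" over intermediate states.
        self.gate = nn.Conv2d(channels * (n_res + 1), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        states = [x]
        out = x
        for block in self.res_blocks:
            out = block(out)
            states.append(out)  # remember each intermediate output
        # The gate adaptively weights current and past states before the
        # long skip connection back to the block input.
        return x + self.gate(torch.cat(states, dim=1))

The concatenate-then-1x1-conv gate follows MemNet's published memory mechanism; it lets the block learn how much of each intermediate state to retain, which matches the "feature selection" role the statement attributes to the gate unit.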