2015
DOI: 10.48550/arxiv.1511.04491
Preprint
Deeply-Recursive Convolutional Network for Image Super-Resolution

Cited by 33 publications (21 citation statements)
References 14 publications
“…Parameter counts and PSNR of compared models:

| Method | Parameter number | PSNR |
|---|---|---|
| SRCNN [20] | 57,184 | 32.59 |
| FSRCNN [44] | 15,740 | 33.06 |
| VDSR [42] | 664,704 | |

Methods and metrics: We compare our model with several recent state-of-the-art methods, including a three-layer CNN (SRCNN) [20], super-resolution forest (SRF) [53], sparse coding-based network (SCN) [22], anchored neighborhood regression (A+) [23], Shepard interpolation neural network (ShCNN) [21], very deep convolutional network (VDSR) [23], and fast convolutional network for SR (FSRCNN) [44]. For fair comparisons, we employ the popular PSNR and SSIM metrics for evaluation.…”
Section: Methods
confidence: 99%
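The quote above evaluates with PSNR, which is a direct function of the mean squared error between the reference and the reconstructed image. As a minimal illustration (the function name and `peak` parameter are my own, not from any paper cited here), PSNR can be computed as:

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better; identical images give infinite PSNR, and for 8-bit images an MSE of 1 corresponds to about 48.13 dB.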
“…More recently, researchers have noticed the importance of image details and made various attempts to recover them. Kim et al. [23], [42] further improved SR quality with different network architectures, such as very deep and recursive network structures. However, these methods rely heavily on very deep networks with many parameters.…”
Section: Deep Learning in Image Super-Resolution
confidence: 99%
“…Kim et al. [13] proposed an image SR method using a Deeply-Recursive Convolutional Network (DRCN), which contains deep CNNs with up to 20 layers. Consequently, the model has a huge number of parameters.…”
Section: Super-Resolution (SR)
confidence: 99%
“…Nevertheless, that is not true in practice: training very deep NNs is challenging due to problems such as vanishing gradients, which can prevent the NN from learning even simple functions like the identity mapping between input and output [Sussillo and Abbott, 2015; Hochreiter et al., 2001]. The usual way to train such DNNs is through residual blocks [He et al., 2015; Zagoruyko and Komodakis, 2017; Lim et al., 2017; Kim et al., 2016]. With residual blocks the NN can effectively choose its own depth by skipping the training of some layers via skip connections.…”
Section: Input-Output
confidence: 99%
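The skip-connection mechanism described in the quote above can be sketched in a few lines: the block computes y = x + F(x), so when the learned transform F is near zero the block falls back to the identity, which is exactly the function a plain deep stack struggles to learn. This is an illustrative sketch (names and the linear-layer simplification are mine, not from the cited papers):

```python
import numpy as np

def residual_block(x, weight, activation=lambda z: np.maximum(z, 0.0)):
    """y = x + F(x), where F here is a linear map followed by ReLU.
    The skip connection (the `x +` term) lets the block behave as the
    identity when `weight` is (near) zero."""
    return x + activation(x @ weight)
```

With `weight` initialized to zeros, the block passes its input through unchanged, so stacking many such blocks does not make the identity harder to represent.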