2016
DOI: 10.1016/j.neucom.2016.02.046

Incorporating image priors with deep convolutional neural networks for image super-resolution

Abstract: Deep convolutional neural networks have been applied to the single-image super-resolution problem and have demonstrated state-of-the-art quality. This paper presents several types of prior information that can be utilized during the training of a deep convolutional neural network. The first type of prior focuses on edge and texture restoration in the output, and the second type utilizes multiple upscaling factors to exploit the recurrence of structure across different scales. As demonstrated by our experimenta…
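The multi-scale prior described in the abstract could, for example, be realized by training one network on pairs generated at several upscaling factors. Below is a minimal, illustrative sketch in PyTorch, not the authors' implementation: the SRCNN-style layer sizes, the `multiscale_batch` helper, and its `hr_patches` and `scales` arguments are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal SRCNN-style network (3 conv layers) operating on a
# bicubic-upscaled single-channel image, as in Dong et al.'s SRCNN.
class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 1),           nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):
        return self.body(x)

# Multi-scale prior (sketch, hypothetical helper): build training pairs
# for several upscaling factors so the same network sees structure
# recurring across scales. `hr_patches` is a batch of HR patches.
def multiscale_batch(hr_patches, scales=(2, 3, 4)):
    inputs, targets = [], []
    for s in scales:
        lr = F.interpolate(hr_patches, scale_factor=1.0 / s,
                           mode="bicubic", align_corners=False)
        # Bicubic pre-upscaling back to the HR size, SRCNN-style.
        up = F.interpolate(lr, size=hr_patches.shape[-2:],
                           mode="bicubic", align_corners=False)
        inputs.append(up)
        targets.append(hr_patches)
    return torch.cat(inputs), torch.cat(targets)
```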

Cited by 72 publications (28 citation statements) · References 17 publications
“…Another work that exploits various image priors during the training phase of a deep CNN [50] is called SCRNN-Pr. One aspect of the prior information focuses on edge/texture restoration and the other concentrates on gradual upscaling via parallel structure recurrence.…”
Section: B. State-of-the-Art Methods on Image SR
confidence: 99%
“…Benefiting from its powerful non-linear mapping, SRCNN (Dong et al., 2014, 2016) improves performance dramatically compared with traditional SR methods. Since training the SRCNN model usually takes a very long time to converge, Liang et al. (2016) introduce Sobel edge detection to capture gradient information and accelerate training convergence.…”
Section: Overall Scheme
confidence: 99%
“…However, SRCNN ignores image priors, which are a significant component of image recovery. Liang et al. [34] introduce a Sobel edge prior to capture gradient information and accelerate training convergence. In fact, the method does reduce training time, but the resulting reconstruction improvement is limited.…”
Section: Related Work
confidence: 99%
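For intuition only, here is a hedged sketch of one way a Sobel edge prior could enter training as an auxiliary loss term. The `sobel_edges` and `loss_with_edge_prior` helpers and the `edge_weight` value are hypothetical and are not taken from the cited paper.

```python
import torch
import torch.nn.functional as F

# Fixed Sobel kernels for horizontal and vertical gradients.
SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def sobel_edges(img):
    """Gradient-magnitude map of a single-channel image batch."""
    gx = F.conv2d(img, SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(img, SOBEL_Y.to(img.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def loss_with_edge_prior(sr, hr, edge_weight=0.1):
    """Pixel-wise MSE plus an edge-map term emphasising gradients."""
    pixel_loss = F.mse_loss(sr, hr)
    edge_loss = F.mse_loss(sobel_edges(sr), sobel_edges(hr))
    return pixel_loss + edge_weight * edge_loss
```

In a formulation like this, the edge term supplies an explicit gradient-domain training signal, which is one way to interpret the faster convergence reported by the citing papers.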