2019
DOI: 10.1007/s11042-019-7544-1

Super-resolution Reconstruction Using Multiconnection Deep Residual Network Combined an Improved Loss Function for Single-frame Image

Cited by 8 publications (3 citation statements)
References 11 publications

“…LSTM units in the hidden layer are fully connected through circular connection, as shown in Figure 3. Since curve fitting is a regression problem, MSE is used in the loss function [18].…”
Section: Analysis Of Normal Power Consumption Prediction Effect
Citation type: mentioning; confidence: 99%
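
The quoted statement names MSE as the loss for a regression-style fitting task but does not reproduce the formula. As a reading aid, the following is a minimal PyTorch sketch of that loss; the tensor values and shapes are invented for illustration and are not taken from the cited work.

# Minimal illustration of the MSE loss used for regression-style fitting.
# The prediction/target values below are hypothetical, not from the cited paper.
import torch
import torch.nn as nn

prediction = torch.tensor([2.5, 0.0, 2.0, 8.0])   # hypothetical model outputs
target     = torch.tensor([3.0, -0.5, 2.0, 7.0])  # hypothetical ground truth

# MSE = mean((prediction - target)^2)
mse_manual = torch.mean((prediction - target) ** 2)
mse_module = nn.MSELoss()(prediction, target)

print(mse_manual.item(), mse_module.item())  # both print 0.375
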
“…Various network structures have been developed, such as deep network with residual learning [14], Laplace pyramid structure [15], residual block [21], and residual dense network [22]. Besides supervised learning and unsupervised learning [23][24][25][26], reinforcement learning [27][28][29] are also introduced to solve the problem of image super-resolution in recent years. Specifically, the literature [30] has a systematic description of image super resolution.…”
Section: Deep Learning For Image Super-resolution
Citation type: mentioning; confidence: 99%
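
The statement above surveys residual-learning structures for image super-resolution. The sketch below is a minimal, generic residual block in PyTorch, offered only to illustrate the idea of learning a residual over an identity skip connection; the channel width of 64 and the absence of normalization are assumptions, not the architecture of the cited paper or of this one.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: two 3x3 convolutions plus an identity skip.
    The channel count (64) and the lack of batch normalization are illustrative choices."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Learn only the residual; the input is added back through the skip connection.
        return x + self.conv2(self.relu(self.conv1(x)))

# Usage: a feature map passes through with its shape unchanged.
features = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(features).shape)  # torch.Size([1, 64, 32, 32])
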
“…Compared with many traditional SISR methods based on machine learning, the simple structure of the SRCNN model shows remarkable performance in image super-resolution problems. Then, a large number of CNN-based models were proposed to obtain more accurate SISR results using different techniques to improve the quality of the reconstructed image: the design of the network structure with residuals [18][19][20][21][22]; generative adversarial networks [23]; neural architecture search [24,25]; various attention mechanisms [26], and other technologies [27,28]. With the improvement of architecture, this field has indeed made rich progress.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
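
The statement above takes SRCNN as the baseline CNN for SISR. Below is a minimal SRCNN-style three-layer network in PyTorch, assuming a bicubic-upscaled single-channel input and the commonly cited 9/1/5 kernel sizes with 64/32 feature widths; it is a sketch of the general idea, not the model proposed in this paper.

import torch
import torch.nn as nn

class SRCNNLike(nn.Module):
    """Three-stage SRCNN-style network: feature extraction, non-linear mapping,
    and reconstruction. Kernel sizes 9/1/5 and widths 64/32 follow the commonly
    cited SRCNN configuration; padding is added here to preserve spatial size."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                  nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # Input is assumed to be already upscaled (e.g. bicubic) to the target size.
        return self.body(x)

# Usage: a bicubic-upscaled luminance patch keeps its spatial size.
lr_upscaled = torch.randn(1, 1, 33, 33)
print(SRCNNLike(1)(lr_upscaled).shape)  # torch.Size([1, 1, 33, 33])
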