2022
DOI: 10.1155/2022/9393589
Rendered Image Superresolution Reconstruction with Multichannel Feature Network

Abstract: In the process of film and television production, clear images can give the audience a real sensory experience, but high-resolution images require a massive amount of production time and highly specialized imaging equipment, which is not a cost-effective solution at the moment. To achieve better cost efficiency during video production, we propose a multichannel-feature superresolution network model that utilizes rendered low-resolution images according to their characteristics. This model includes a feature…

Cited by 3 publications (1 citation statement). References 31 publications.
“…In order to ensure the rapid convergence of the network, the gradient clip is gradually set within a certain range [−τ, τ], where τ is set to 0.4 [25][26][27]. The activation function and regularization operation follow each convolution layer, and the negative slope of the activation layer is 0.2 [28][29][30]. We trained the deep network for 90 generations according to the above procedure, and the entire process took about 75 h. We observed that the loss function of the 85th-generation network model was the smallest and chose this model for subsequent experiments.…”
Section: Methods
confidence: 99%
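The two numerical settings in the quoted passage (gradient clipping to [−τ, τ] with τ = 0.4, and a leaky activation with negative slope 0.2) can be sketched as follows. This is an illustrative, framework-free sketch, not the authors' code; the function names and list-based gradient representation are assumptions.

```python
def clip_gradient(grads, tau=0.4):
    """Clamp each gradient component into [-tau, tau], as in the quoted setup.
    `grads` is assumed to be a flat list of gradient values."""
    return [max(-tau, min(tau, g)) for g in grads]

def leaky_relu(x, negative_slope=0.2):
    """Leaky activation with the negative slope of 0.2 described in the passage."""
    return x if x >= 0 else negative_slope * x
```

For example, `clip_gradient([0.9, -1.2, 0.1])` yields `[0.4, -0.4, 0.1]`, and `leaky_relu(-1.0)` yields `-0.2`; in a deep-learning framework the same effect would typically come from a built-in value-clipping utility and a leaky-ReLU layer.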