Residual learning of deep convolutional neural networks for image denoising (2019)
DOI: 10.3233/jifs-190017

Cited by 15 publications (6 citation statements)
References 22 publications
“…Shan et al. used feature-based technology to extract the feature sizes required for clothing from 3D human model data. 3D human body and clothing modeling has long been a hot and difficult topic in computer graphics and clothing CAD [9]. For a long time, research in this field has mainly produced the following modeling methods: building a 3D wireframe model from points, lines, and curves; building a 3D solid model from voxels; and building a 3D surface model from points, edges, and surfaces using the mesh facet method, with the 3D surface model built on the basis of 3D physical modeling.…”
Section: Literature Review (mentioning)
confidence: 99%
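To make the surface-model representation mentioned in that excerpt concrete, here is a minimal sketch (not from the cited work; all names are illustrative) of a 3D surface mesh stored as vertex points plus triangular facets:

```python
from dataclasses import dataclass, field

@dataclass
class SurfaceMesh:
    """3D surface model as a mesh of facets: vertex points plus triangular faces."""
    vertices: list = field(default_factory=list)  # list of (x, y, z) points
    faces: list = field(default_factory=list)     # triples of vertex indices

    def add_vertex(self, x, y, z):
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1

    def add_face(self, i, j, k):
        self.faces.append((i, j, k))

# a single triangular facet of a body/garment surface
mesh = SurfaceMesh()
a = mesh.add_vertex(0.0, 0.0, 0.0)
b = mesh.add_vertex(1.0, 0.0, 0.0)
c = mesh.add_vertex(0.0, 1.0, 0.0)
mesh.add_face(a, b, c)
print(len(mesh.vertices), len(mesh.faces))  # 3 1
```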
“…The image features are then described to obtain a statistical histogram; with an intersection-kernel SVM, a linear combination is made between the probabilities of occurrence of the different angles, and the highest probability value after the combination is taken as the recognition result [20]. The algorithmic procedure is shown in…”
Section: Feature Recognition and Correction (mentioning)
confidence: 99%
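As a rough illustration of the histogram-plus-intersection-kernel-SVM step described in that excerpt (a sketch with hypothetical data, not the authors' implementation), the snippet below trains scikit-learn's SVC with a histogram intersection kernel and takes the highest class probability as the recognition result:

```python
import numpy as np
from sklearn.svm import SVC

def histogram_intersection_kernel(X, Y):
    """Gram matrix K[i, j] = sum_k min(X[i, k], Y[j, k])."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

# hypothetical data: L1-normalized feature histograms for 4 viewing-angle classes
rng = np.random.default_rng(0)
X_train = rng.random((40, 64))
X_train /= X_train.sum(axis=1, keepdims=True)
y_train = np.repeat(np.arange(4), 10)            # 10 samples per angle class

clf = SVC(kernel=histogram_intersection_kernel, probability=True)
clf.fit(X_train, y_train)

X_test = rng.random((5, 64))
X_test /= X_test.sum(axis=1, keepdims=True)
probs = clf.predict_proba(X_test)   # combined per-angle probabilities
pred = probs.argmax(axis=1)         # highest probability -> recognition result
print(pred)
```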
“…In equation (20), (ς_1, ς_2, ⋯, ς_η) and (ς_1′, ς_2′, ⋯, ς_η′) represent the eigenvectors of the two feature points.…”
Section: Feature Recognition and Correction (mentioning)
confidence: 99%
“…(b). The details of the proposed ResNet residual neural network are introduced as follows [32, 37]. Input layer: the inputs of the ResNet are color images of real scenes, and the size of the input image is 224 × 224 × 3.…”
(mentioning)
confidence: 99%
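For reference, here is a minimal PyTorch-style sketch of a basic residual block applied to a 224 × 224 × 3 color input as described for that input layer; this is an assumed illustration of the residual idea, not the exact network of [32, 37]:

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                      # skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # residual addition

# a 224 x 224 x 3 color image, as described for the input layer (batch of 1)
x = torch.randn(1, 3, 224, 224)
stem = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)  # typical ResNet stem
block = BasicResidualBlock(64)
y = block(stem(x))
print(y.shape)  # torch.Size([1, 64, 112, 112])
```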