2020
DOI: 10.37188/yjyxs20203512.1299
Expression recognition based on residual rectifier enhanced convolution neural network

Cited by 1 publication (2 citation statements); references 0 publications.
“…The outstanding performance of sparse coding algorithms in image denoising, clustering, deblurring, and classification has attracted many researchers [28]. Our research group and related research teams have made remarkable achievements in expression recognition using sparse coding as the classification method [29]. Sparse representation expresses the input signal as a linear combination over a learned dictionary, and its dictionary learning ability is designed to infer the [30].…”
Section: Deep Sparse Convolutional Neural Network
confidence: 99%
“…Our research group and related research teams have made remarkable achievements in expression recognition using sparse coding as the classification method [29]. Sparse representation expresses the input signal as a linear combination over a learned dictionary, and its dictionary learning ability is designed to infer the [30]. The performance of a deep network increases with its depth and width, but the drawback is that a large number of parameters must be trained.…”
Section: Deep Sparse Convolutional Neural Network
confidence: 99%
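The sparse-representation idea these statements reference (an input signal expressed as a linear combination of a few dictionary atoms) can be sketched with a minimal ISTA (iterative shrinkage-thresholding) solver. This is an illustrative sketch under synthetic assumptions, not the cited paper's implementation; the dictionary `D`, signal `y`, and sparsity weight `lam` are made up for the example.

```python
import numpy as np

def soft_threshold(x, t):
    # Element-wise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_sparse_code(D, y, lam=0.05, n_iter=200):
    """Infer a sparse code a with y ≈ D @ a by minimizing
    0.5 * ||y - D a||^2 + lam * ||a||_1 via ISTA.

    D : (m, k) dictionary with unit-norm columns (atoms)
    y : (m,)  input signal
    """
    # Step size 1/L, where L is the Lipschitz constant of the gradient
    # of the quadratic term (the squared spectral norm of D).
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)              # gradient of 0.5*||y - Da||^2
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy example: a signal built from 2 of 8 atoms is recovered sparsely.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)                # normalize atoms to unit norm
y = 0.9 * D[:, 1] + 0.5 * D[:, 5]             # true support: atoms 1 and 5
a = ista_sparse_code(D, y)
```

In a sparse-coding classifier of the kind the statements describe, the inferred code `a` (rather than the raw signal) would then be fed to the classification stage.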