2019
DOI: 10.1016/j.amc.2019.02.071

Face recognition via Deep Stacked Denoising Sparse Autoencoders (DSDSA)

Cited by 48 publications (22 citation statements)
References 28 publications
“…Table 4 shows our results for three configurations of (P, R), indicating that our WSBP and CLBP S M(m 1 , μ 2 ) descriptors achieved the highest recognition rates. Table 5 compares our results with a deep learning approach based on Deep Stacked Denoising Sparse Autoencoders (DSDSA) [13]. As these tables show, even using a single small scale (P, R) = (4, 2), our descriptors WSBP (Ours 1) and CLBP S M(m 1 , μ 2 ) reached recognition rates of 98.83% and 98.96%, respectively, exceeding the performance of DSDSA.…”
Section: Results with Caltech 1999 and KDEF datasets
confidence: 99%
“…An autoencoder (AE) layer is a specific type of artificial neural network composed of an encoder and a decoder. It is used to extract complex features from data and to generate new data [26] with as much accuracy and as little error as possible. The output of the first AE layer is taken as the input of the following layer.…”
Section: Deep Network Using Autoencoder (DN-AE)
confidence: 99%
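The stacking scheme the statement above describes (each AE trains on the previous layer's codes) can be sketched minimally as follows. This is an illustrative numpy-only denoising autoencoder with tied weights, not the DSDSA model from the cited paper; the sparsity penalty is omitted for brevity, and all names and hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """One AE layer: corrupt the input, encode, decode, reduce reconstruction error."""
    def __init__(self, n_in, n_hidden, noise=0.3, lr=0.1):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # tied weights
        self.b = np.zeros(n_hidden)                      # encoder bias
        self.c = np.zeros(n_in)                          # decoder bias
        self.noise, self.lr = noise, lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.c)

    def train_step(self, x):
        # Denoising criterion: zero out a random fraction of the input,
        # but reconstruct the clean input.
        x_noisy = x * (rng.random(x.shape) > self.noise)
        h = self.encode(x_noisy)
        y = self.decode(h)
        # Squared-error gradients, backpropagated through the tied weights.
        dy = (y - x) * y * (1 - y)
        dh = (dy @ self.W) * h * (1 - h)
        self.W -= self.lr * (x_noisy.T @ dh + dy.T @ h)
        self.b -= self.lr * dh.sum(axis=0)
        self.c -= self.lr * dy.sum(axis=0)
        return float(np.mean((y - x) ** 2))

# Greedy layer-wise stacking: train layer 1 on the data,
# then train layer 2 on layer 1's hidden codes, and so on.
X = rng.random((64, 32))
layers = [DenoisingAutoencoder(32, 16), DenoisingAutoencoder(16, 8)]
inp = X
for ae in layers:
    for _ in range(50):
        err = ae.train_step(inp)
    inp = ae.encode(inp)  # codes become the next layer's input
print(inp.shape)  # deep features: (64, 8)
```

In a full pipeline these stacked codes would feed a classifier (e.g. softmax) and the whole network would then be fine-tuned end to end.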
“…Deep autoencoders have been widely used to pre-train deep neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), and are reported to be a useful way to represent the original variables. In addition, deep autoencoders have been successfully applied in speech recognition [31], 3D shape retrieval [32], face recognition [33][34][35], and more. Next, we introduce the proposed algorithm.…”
Section: Deep Autoencoder
confidence: 99%