2015
DOI: 10.1142/s1469026815500145
Convolutional Neural Networks with Fused Layers Applied to Face Recognition

Abstract: In this paper, we propose an effective convolutional neural network (CNN) model for the problem of face recognition. The proposed CNN architecture applies fused convolution/subsampling layers that result in a simpler model with fewer network parameters; that is, a smaller number of neurons, trainable parameters, and connections. In addition, it does not require any complex or costly image preprocessing steps that are typical in existing face recognizer systems. In this work, we enhance the stochastic diagonal L…
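The abstract is truncated above, but the fused-layer idea can be illustrated with a short sketch. Assuming the fusion amounts to evaluating the convolution only at subsampled positions, i.e., a strided convolution replacing a separate convolution-plus-subsampling pair, a minimal NumPy version might look like the following; the function name, shapes, and tanh nonlinearity are illustrative assumptions, not taken from the paper.

import numpy as np

def fused_conv_subsample(image, kernel, stride=2):
    """Convolve and subsample in one pass: the kernel is evaluated only at
    every `stride`-th position, so no separate subsampling layer (and no
    intermediate full-resolution feature map) is needed."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return np.tanh(out)  # squashing nonlinearity, as in classic CNN layers

# A 32x32 face crop with a 5x5 kernel yields a 14x14 map directly,
# versus a 28x28 convolution map followed by 14x14 subsampling when unfused.
face = np.random.rand(32, 32)
fmap = fused_conv_subsample(face, np.random.rand(5, 5))
print(fmap.shape)  # (14, 14)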

Cited by 18 publications (15 citation statements)
References 8 publications
“…A mini-batch is defined as a subset of the training set that is used to evaluate the gradient of the loss function and update the weights. The gradient descent algorithm updates the network parameters (weights and biases) [50].…”
Section: Experimental Results and Analysis
confidence: 99%
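The quoted description matches the standard mini-batch update: each step estimates the loss gradient on a small subset of the training set and moves the parameters against it. A minimal sketch for a linear model with squared loss follows; the model, loss, and variable names are assumptions for illustration, not taken from the cited work.

import numpy as np

def minibatch_gradient_descent(X, y, lr=0.01, batch_size=32, epochs=10, seed=0):
    """Mini-batch gradient descent for a linear model y ~ X @ w + b.
    Each update uses the gradient of the squared loss on one mini-batch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        order = rng.permutation(n)                  # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            err = X[idx] @ w + b - y[idx]           # residuals on the batch
            w -= lr * (X[idx].T @ err) / len(idx)   # average dL/dw
            b -= lr * err.mean()                    # average dL/db
    return w, b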
“…According to Figure 1, the flow chart of the overall LPR system phases is illustrated. The detailed algorithm of the enhanced SDLM can be found in [35]. MATLAB and the C language were used as the platform.…”
Section: LPR System Flowchart
confidence: 99%
“…Therefore, Gaussian weights have been used for the rest of the testing. The learning algorithm used to train the CNN is an enhanced version of the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm [35]. This learning algorithm improves on standard backpropagation in that it prevents the gradient from getting trapped in local minima.…”
Section: Type of Weight Selection
confidence: 99%
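For context, the standard (non-enhanced) stochastic diagonal Levenberg-Marquardt update gives each weight its own step size based on a running estimate of the diagonal of the Hessian; the enhanced variant from [35] is not reproduced here. A sketch of the standard rule, with the squared gradient used as a common Gauss-Newton-style stand-in for the diagonal curvature term; all names and constants are illustrative assumptions.

import numpy as np

def sdlm_step(w, grad, h_diag, lr=0.02, mu=0.1, gamma=0.01):
    """One standard stochastic diagonal Levenberg-Marquardt update.
    h_diag is a running estimate of the diagonal of the Hessian; mu bounds
    the per-weight step size in flat regions, where the update then behaves
    much like plain stochastic gradient descent."""
    h_diag = (1.0 - gamma) * h_diag + gamma * grad**2   # running curvature estimate
    w = w - (lr / (h_diag + mu)) * grad                 # per-weight learning rate
    return w, h_diag

# Usage: keep h_diag alongside the weights and update both every sample or batch.
w = np.zeros(10)
h_diag = np.zeros(10)
grad = np.random.randn(10)                              # gradient from backpropagation
w, h_diag = sdlm_step(w, grad, h_diag)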
“…The seven classes are represented by seven fully connected layer neurons. To accelerate the convergence process while maintaining generalization ability, the stochastic diagonal Levenberg-Marquardt (SDLM) learning algorithm [9] was used.…”
Section: CNN Design
confidence: 99%
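The seven-neuron output layer described above is a standard fully connected classification head. A minimal sketch with a softmax readout; the softmax, feature size, and names are assumptions for illustration, not details from the cited work.

import numpy as np

def seven_class_output(features, W, b):
    """Fully connected output layer: one neuron per class (seven here),
    followed by a softmax so the outputs can be read as class probabilities."""
    z = features @ W + b        # raw class scores, shape (7,)
    z -= z.max()                # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

features = np.random.rand(120)                 # feature vector from earlier layers
W = 0.01 * np.random.randn(120, 7)
b = np.zeros(7)
print(seven_class_output(features, W, b))      # seven probabilities summing to 1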