2014 IEEE International Conference on Computational Intelligence and Computing Research
DOI: 10.1109/iccic.2014.7238334
Multi-layer perceptron (MLP) neural network technique for offline handwritten Gurmukhi character recognition

Cited by 46 publications (15 citation statements)
References 5 publications
“…Set f(·) as the activation function and x_n ∈ ℝ^(1×n), w_n ∈ ℝ^(n×1), b ∈ ℝ^(q×n). Thus, the forward propagation from the input layer to the output layer is [30]…”
Section: MLP Model Establishment (mentioning)
confidence: 99%
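The forward propagation the excerpt describes can be sketched as follows. This is an illustrative implementation, not code from the cited paper: the function names (`mlp_forward`, `sigmoid`), layer sizes, and the use of a single hidden layer are assumptions; only the f(x·W + b) pattern comes from the excerpt.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation, standing in for the excerpt's generic f(.)."""
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W_hidden, b_hidden, W_out, b_out, f=sigmoid):
    """Forward propagation from input layer to output layer: f(x.W + b) per layer."""
    h = f(x @ W_hidden + b_hidden)   # hidden-layer activations
    y = f(h @ W_out + b_out)         # output-layer activations
    return y

# Hypothetical shapes: a 1xn input row vector, 8 hidden units, 3 outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
y = mlp_forward(x, W1, b1, W2, b2)
print(y.shape)  # (1, 3)
```

With sigmoid outputs, every entry of `y` lies in (0, 1), which is why this activation pairs naturally with per-class scores in character recognition.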
“…The architecture of a CNN (Convolutional Neural Network) is a multi-layer feed-forward network tailored to minimize sensitivity to translations, distortions, and rotations of the input images [14]. Insensitivity to local shifts is built into the network architecture by forcing sets of units located at different places to use identical weight vectors, thereby constraining them to detect the same feature on different parts of the input [1].…”
Section: B. Convolutional Neural Network (mentioning)
confidence: 99%
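The weight sharing described above can be sketched with a naive convolution: every output unit reuses the same kernel weights, so the same feature is detected wherever it occurs in the input. This is an illustrative sketch, not code from either cited work; the function name `conv2d_valid` and the edge-detector kernel are assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D correlation with a single shared kernel (weight vector)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Every (i, j) position applies the *same* weights -> weight sharing.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A simple edge kernel responds at the edge's position, wherever that is.
edge = np.array([[1.0, -1.0]])
img = np.zeros((1, 6))
img[0, 3] = 1.0                      # a single step at column 3
print(conv2d_valid(img, edge))       # nonzero only around column 3
```

Shifting the step in `img` shifts the response by the same amount, which is exactly the translation insensitivity the excerpt attributes to shared weight vectors.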
“…The other layers map inputs to outputs by combining the inputs with each node's weights and applying an activation function. The logistic and hyperbolic tangent sigmoid functions are the most common activation functions in an MLPNN [12].…”
Section: Multilayer Perceptron Neural Network (MLPNN) (mentioning)
confidence: 99%
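The two activation functions named in the excerpt can be written out directly. The function names below are generic placeholders; the cited paper gives no implementation.

```python
import math

def logistic(z):
    """Logistic sigmoid: squashes z into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def tanh_sigmoid(z):
    """Hyperbolic tangent sigmoid: squashes z into the open interval (-1, 1)."""
    return math.tanh(z)

print(logistic(0.0))      # 0.5
print(tanh_sigmoid(0.0))  # 0.0
```

The choice between the two mainly affects the output range: logistic outputs suit probability-like class scores, while the zero-centered tanh often eases training of hidden layers.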