2013
DOI: 10.5539/cis.v6n2p38

Setting the Hidden Layer Neuron Number in Feedforward Neural Network for an Image Recognition Problem under Gaussian Noise of Distortion

Abstract: An image recognition problem is considered for a single hidden layer perceptron fed with 5-by-7 monochrome images distorted by Gaussian noise. The number of hidden layer neurons in this network should be set optimally to maximize its productivity. For minimizing training time and recognition error rate simultaneously, two ways of solving the corresponding two-objective minimization problem are suggested. One of them deals with an equilibrium conception…
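For concreteness, a minimal MATLAB sketch of the distortion model named in the abstract is given below; the noise level sigma, the binary pixel values, and the clipping back into [0, 1] are assumptions for illustration, not details taken from the paper.

```matlab
% Illustrative only: a 5-by-7 monochrome image distorted by additive
% zero-mean Gaussian noise. sigma is an assumed noise level.
X = randi([0 1], 7, 5);            % placeholder 5-by-7 monochrome image
sigma = 0.2;                       % assumed standard deviation of distortion
Xnoisy = X + sigma * randn(7, 5);  % Gaussian noise of distortion
Xnoisy = min(max(Xnoisy, 0), 1);   % clip back into the monochrome range
```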

Cited by 5 publications (10 citation statements) | References 25 publications
Citation types: 0 supporting, 10 mentioning, 0 contrasting
“…The correction is in importing a share of pixel distortion into DSS. This makes training sets milder for 2LP, whose performance under normally distributed feature distortions (NDFD) is excellent [18], [19]. PSSDR r defines the share of NDFD.…”
Section: Training Parameters (mentioning)
confidence: 99%
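The "share of NDFD" mentioned in this statement can be pictured with a hedged MATLAB sketch; the training set Xtrain, the share r, and the distortion magnitude sigma are placeholders, and the uniform random choice of which samples to distort is an assumption, not the cited paper's procedure.

```matlab
% Hypothetical sketch: import a share r of normally distributed feature
% distortions (NDFD) into an otherwise clean training set.
Xtrain = double(randi([0 1], 35, 1000));  % placeholder 5-by-7 images, flattened
r = 0.5;                                  % assumed PSSDR-like share of NDFD
sigma = 0.2;                              % assumed distortion magnitude
nSamp = size(Xtrain, 2);
idx = randperm(nSamp, round(r * nSamp));  % samples receiving distortion
Xtrain(:, idx) = Xtrain(:, idx) + sigma * randn(size(Xtrain, 1), numel(idx));
```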
“…Advantages of 2LP are simplicity and high speed of classification [12], [15], [16]. Quick resetting and resource-saving at a low classification error percentage (CEP) are believed to be reachable if 2LP is optimised by its hidden layer neuron number (HLNN) along with the training parameters [12], [17], [18]. The criterion of optimisation is minimisation of CEP, which depends on the training parameters.…”
Section: Related Work (mentioning)
confidence: 99%
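A rough MATLAB sketch of HLNN selection by CEP minimisation follows; the candidate sizes and placeholder data are assumptions, and evaluating CEP on the training set itself is a simplification (in practice it would be estimated on held-out distorted samples).

```matlab
% Hedged sketch: pick the hidden layer neuron number (HLNN) that gives
% the lowest classification error percentage (CEP). Data are placeholders.
X = rand(35, 2000);                       % placeholder inputs
T = full(ind2vec(randi(26, 1, 2000)));    % placeholder one-hot targets
bestCEP = Inf;
for N = [50 100 200 300]                  % assumed candidate HLNN values
    net = feedforwardnet(N, 'traingda');
    net.trainParam.showWindow = false;    % suppress the training GUI
    net = train(net, X, T);
    cep = 100 * mean(vec2ind(net(X)) ~= vec2ind(T));  % CEP, in percent
    if cep < bestCEP
        bestCEP = cep;
        bestN = N;
    end
end
```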
“…Let the usefulness of 2LP (16) during the training be measured with its performance function "mse" according to the sum of squared errors [8], [9], [22], [23]. Finally, having preset the minimum performance gradient to 10^(-6), let the number of epochs be 5000 in order to prevent long-dragging convergence of TP and to shorten the ultimate TP period for each pass.…”
Section: Description of 2LP (6) Configuration for M6080IS (mentioning)
confidence: 99%
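In Neural Network Toolbox terms, the quoted settings might be written as below; the hidden layer size of 300 is carried over from the surrounding statements, and the rest follows the quote directly.

```matlab
% Sketch of the quoted training settings for a 2LP.
net = feedforwardnet(300, 'traingda');  % 300 hidden neurons, per the quotes
net.performFcn = 'mse';                 % "mse" performance function
net.trainParam.min_grad = 1e-6;         % minimum performance gradient 10^(-6)
net.trainParam.epochs = 5000;           % epoch cap to prevent long convergence
```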
“…Before training on STSM6080I NDPD from TS (2), 2LP (6), (15) is initialized with MATLAB function "feedforwardnet" from the Neural Network Toolbox [4], [9], [13]. Such a transfer within the Neural Network Toolbox is implemented with the logarithmic sigmoid MATLAB transfer function "logsig", which is differentiable owing to its smooth "S-shape" [7], [16], [17]. Then 2LP (15), (16), having been adapted with weight and bias learning rules by the Neural Network Toolbox adapt function "adaptwb", is going to be trained with MATLAB backpropagation training function "traingda" (see its usage in [3], [4], [7], [9], [12], [18], [19]). Function "traingda" will update 4800·300 + 300 + 300·26 + 26 = 1,448,126…”
Section: Description of 2LP (6) Configuration for M6080IS (mentioning)
confidence: 99%
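The parameter count at the end of this statement can be checked with a short MATLAB sketch of the quoted configuration; the input and output sizes (4800 and 26) are read off the 4800·300 + 300 + 300·26 + 26 breakdown, and the zero-filled configure call is only a way to fix the layer dimensions.

```matlab
% Sketch of the quoted 2LP configuration and its parameter count.
net = feedforwardnet(300, 'traingda');               % 300 hidden neurons
net.layers{1}.transferFcn = 'logsig';                % smooth S-shaped transfer
net = configure(net, zeros(4800, 1), zeros(26, 1));  % 4800 inputs, 26 outputs
% Weights and biases updated by "traingda":
nParams = 4800*300 + 300 + 300*26 + 26;              % = 1448126
disp(nParams)
```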