2002
DOI: 10.1007/3-540-46014-4_27
Artificial Neural Network Learning: A Comparative Review

Cited by 34 publications (19 citation statements)
References 31 publications
“…Then, all the weights in the net are adjusted slightly in the direction that would bring the output values of the net closer to the desired output values. There are several algorithms with which a network can be trained (Neocleous and Schizas 2002). η is a positive number (called the learning rate) that determines the step size in the gradient descent search. A large value enables back propagation to move faster toward the target weight configuration, but it also increases the chance of never reaching that target.…”
Section: Neural Network (mentioning)
confidence: 99%
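To make the role of the learning rate η concrete, here is a minimal Python sketch of a single gradient-descent weight update for one linear unit with squared error. The function name and the toy data are hypothetical, chosen for illustration only, not taken from the cited papers.

```python
import numpy as np

def gradient_descent_step(w, x, target, eta):
    """One gradient-descent update for a single linear unit.

    w      : weight vector
    x      : input vector
    target : desired output for x
    eta    : learning rate (step size in the gradient descent search)
    """
    output = np.dot(w, x)        # forward pass
    error = output - target      # prediction error
    grad = error * x             # gradient of 0.5 * error**2 w.r.t. w
    return w - eta * grad        # step against the gradient

# A large eta moves faster toward the target weights but can overshoot
# and oscillate; a small eta converges more reliably but slowly.
w = np.zeros(3)
for _ in range(100):
    w = gradient_descent_step(w, np.array([1.0, 2.0, 3.0]), 14.0, eta=0.01)
```

With a small η the error shrinks steadily on each step; raising η too far makes the updates overshoot, and the error can grow instead of decaying.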
“…More recently, many new training algorithms have been proposed to overcome the drawbacks of traditional neural networks and to increase their reliability (Bianchini and Gori, 1996; Neocleous and Schizas, 2002). In this paradigm, one of the significant developments is a class of kernel-based neural networks called Support Vector Machines (SVMs), the principle of which is rooted in statistical learning theory and the method of structural risk minimization (Haykin, 2003).…”
Section: Support Vector Machine (mentioning)
confidence: 99%
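As an illustration of the kernel-based approach this statement refers to, the sketch below trains a soft-margin SVM classifier with scikit-learn. The synthetic dataset and the parameter choices are invented for the example and are not drawn from the cited works.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy two-class problem (synthetic data, for illustration only).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps inputs to a high-dimensional feature
# space; C trades off margin width against training error, which is how
# structural risk minimization balances capacity and empirical risk.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```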
“…There are several algorithms by which a network can be trained [17], but the most popular is the back propagation (BP) algorithm. The back propagation algorithm performs a number of weight modifications before it settles on a good weight configuration; for n training instances and w weights, each epoch of learning takes O(nw) time.…”
Section: Perceptron Based Techniques (mentioning)
confidence: 99%
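A short sketch shows where the O(nw) per-epoch cost comes from: each of the n training instances passes once forward and once backward through all w weights. The single-hidden-layer network below is a hypothetical NumPy illustration, not code from the cited reference.

```python
import numpy as np

def backprop_epoch(X, y, W1, W2, eta=0.1):
    """One epoch of back propagation for a one-hidden-layer network.

    Every row of X is propagated forward and backward through all
    weights in W1 and W2, so the epoch costs O(n * w) time, where
    n = len(X) and w = W1.size + W2.size.
    """
    for x, t in zip(X, y):                   # n training instances
        h = np.tanh(W1 @ x)                  # forward: hidden layer
        out = W2 @ h                         # forward: output layer
        err = out - t                        # output error
        grad_W2 = np.outer(err, h)           # backward: output weights
        delta_h = (W2.T @ err) * (1 - h**2)  # backward: tanh derivative
        grad_W1 = np.outer(delta_h, x)       # backward: hidden weights
        W1 -= eta * grad_W1                  # update all w weights
        W2 -= eta * grad_W2
    return W1, W2
```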