IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
DOI: 10.1109/ijcnn.1999.832591
Improved second-order training algorithms for globally and partially recurrent neural networks

Cited by 10 publications (7 citation statements). References 13 publications.
“…43 An additional step of the computational analysis involved the application of Friedman's test 26 to verify whether the obtained results could be considered significantly different. In all cases, the p-values achieved were close to zero, which indicates that the results were directly affected by the chosen predictor.…”
Section: Results (mentioning)
confidence: 99%
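Friedman's test, referenced in the statement above, is a nonparametric test for paired samples across multiple treatments, here comparing predictors over the same datasets. A minimal sketch of such a check using SciPy; the error scores below are made up for illustration and do not come from the citing paper:

```python
from scipy.stats import friedmanchisquare

# Hypothetical per-dataset error scores for three predictors.
# Each list holds one score per dataset (paired samples).
errors_a = [0.12, 0.30, 0.25, 0.18, 0.22]
errors_b = [0.10, 0.28, 0.20, 0.15, 0.19]
errors_c = [0.20, 0.35, 0.33, 0.26, 0.31]

stat, p = friedmanchisquare(errors_a, errors_b, errors_c)
print(f"Friedman chi-square = {stat:.3f}, p-value = {p:.4f}")
# A p-value close to zero, as reported in the statement above,
# indicates the predictors' results differ significantly.
```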
“…Here, we use an MLP with three layers, with the hyperbolic tangent and a linear function as activation functions for the hidden and output layers, respectively. However, in this work the training process is performed using the Modified Scaled Conjugate Gradient [62].…”
Section: Multilayer Perceptron (MLP) (mentioning)
confidence: 99%
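A minimal sketch of the architecture that statement describes, with a tanh hidden layer and a linear output layer. Layer sizes, data, and initialization are illustrative assumptions, and the Modified Scaled Conjugate Gradient training of [62] is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration only.
n_in, n_hidden, n_out = 4, 10, 1
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def mlp_forward(x):
    h = np.tanh(W1 @ x + b1)   # hyperbolic tangent in the hidden layer
    return W2 @ h + b2         # linear activation in the output layer

y = mlp_forward(rng.normal(size=n_in))
```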
“…The MLP is trained via the Modified Scaled Conjugate Gradient [62] algorithm using the following stopping conditions: (i) a maximum number of iterations equal to 300; (ii) hold-out cross-validation; (iii) a training progress threshold of 10⁻⁶. The number of hidden neurons for a single MLP model is defined using a grid search in the range [3, 250].…”
Section: Experimental Setup (mentioning)
confidence: 99%
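A sketch of that hold-out grid search over hidden-layer size. scikit-learn ships no scaled conjugate gradient solver, so 'lbfgs' stands in here; the synthetic data, the grid step, and the tolerance value are all assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))                 # synthetic data, for illustration
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best_n, best_err = None, np.inf
for n_hidden in range(3, 251, 25):            # grid over [3, 250]; step is assumed
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                         activation="tanh",
                         solver="lbfgs",      # stand-in for the SCG of [62]
                         max_iter=300,        # stopping condition (i)
                         tol=1e-6)            # stopping condition (iii), assumed
    model.fit(X_tr, y_tr)
    err = np.mean((model.predict(X_val) - y_val) ** 2)   # hold-out error, (ii)
    if err < best_err:
        best_n, best_err = n_hidden, err
print(best_n, best_err)
```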
“…We have adopted here a scaled conjugate gradient second-order method [26], used for both recurrent and nonrecurrent neural networks. This method is numerically stable, converges fast, and has a low computational cost [27].…”
Section: Training Process (mentioning)
confidence: 99%
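For reference, Møller's original Scaled Conjugate Gradient, the method that statement cites as [26], can be sketched as follows. This is a plain illustration of the published 1993 algorithm, not the modified variant proposed in the paper above: it approximates the Hessian-vector product by finite differences, avoiding both an explicit Hessian and a line search.

```python
import numpy as np

def scg(f, grad, w, max_iter=500, sigma0=1e-4, lam=1e-6, tol=1e-8):
    """Sketch of Moller's Scaled Conjugate Gradient (1993), for illustration."""
    w = np.asarray(w, dtype=float)
    n = w.size
    lam_bar, delta = 0.0, 0.0
    r = -grad(w)                      # negative gradient (residual)
    p = r.copy()                      # initial search direction
    success, f_w = True, f(w)
    for k in range(1, max_iter + 1):
        if np.linalg.norm(r) < tol:
            break
        p2 = p @ p
        if success:
            # Hessian-vector product H p approximated by finite differences
            sigma = sigma0 / np.sqrt(p2)
            s = (grad(w + sigma * p) - grad(w)) / sigma
            delta = p @ s
        # damp delta so the local quadratic model stays positive definite
        delta += (lam - lam_bar) * p2
        if delta <= 0:
            lam_bar = 2.0 * (lam - delta / p2)
            delta = -delta + lam * p2
            lam = lam_bar
        mu = p @ r
        alpha = mu / delta            # step size from the quadratic model
        f_new = f(w + alpha * p)
        # comparison parameter: how well the model predicted the decrease
        Delta = 2.0 * delta * (f_w - f_new) / mu**2
        if Delta >= 0:                # successful step: accept the update
            w = w + alpha * p
            f_w = f_new
            r_new = -grad(w)
            lam_bar, success = 0.0, True
            if k % n == 0:            # periodic restart along the gradient
                p = r_new.copy()
            else:
                beta = (r_new @ r_new - r_new @ r) / mu
                p = r_new + beta * p
            r = r_new
            if Delta >= 0.75:
                lam *= 0.25           # model is good: reduce damping
        else:                         # failed step: keep w, raise damping
            lam_bar, success = lam, False
        if Delta < 0.25:
            lam += delta * (1.0 - Delta) / p2
    return w

# usage on a toy quadratic with minimum at w = [1, -2]
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = A @ np.array([1.0, -2.0])
w_opt = scg(lambda w: 0.5 * w @ A @ w - b @ w, lambda w: A @ w - b, np.zeros(2))
```

The adaptive damping parameter lam plays the role of a trust-region control: it grows when the quadratic model predicts the error poorly and shrinks when it predicts well, which is what gives the method its numerical stability without a line search.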