1990 IJCNN International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.1990.137819

Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights

Cited by 1,044 publications (554 citation statements). References 2 publications.

“…We cannot guarantee that these results are optimal for every considered dataset. However, the resulting intervals roughly confirm the findings for the weight intervals reported in Nguyen & Widrow (1990) and Thimm & Fiesler (1994). To the best of our knowledge there is no specific study on this matter in the literature and in light of these results this should constitute an interesting point for deeper investigation.…”
Section: Initializing Hidden-to-hidden and Hidden-to-output Layer Connections (supporting)
confidence: 85%
“…A number of approaches, such as those presented in this section, claim to improve BP convergence speed and to avoid bad local minima (Nguyen & Widrow, 1990; Wessels & Barnard, 1992).…”
Section: Random Selection of Initial Weights (mentioning)
confidence: 99%
“…Before training our ANN models, their weights and biases were initialized according to the Nguyen-Widrow initialization algorithm [32]. The optimization of a network can be accomplished by changing the network parameters (such as the number of neurons, number of hidden layers and number of epochs) one at a time.…”
Section: Artificial Neural Network Modeling and Performance Measures (mentioning)
confidence: 99%
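
For readers unfamiliar with the scheme cited as [32] above, a minimal sketch of Nguyen-Widrow initialization follows. This is an illustration under common assumptions (inputs normalized to [-1, 1], a single layer of sigmoidal hidden units, an initial uniform draw in [-0.5, 0.5]), not the routine used by the citing paper; the function name is ours.

import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=None):
    """Sketch of Nguyen-Widrow initialization for one hidden layer.

    Each hidden unit's weight vector is rescaled to magnitude
    beta = 0.7 * n_hidden ** (1 / n_inputs), and biases are drawn
    uniformly from [-beta, beta].
    """
    rng = np.random.default_rng() if rng is None else rng
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Random start, then normalize each unit's weight row to length beta.
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    w *= beta / np.linalg.norm(w, axis=1, keepdims=True)
    # Biases spread the units' active regions across the input range.
    b = rng.uniform(-beta, beta, size=n_hidden)
    return w, b

# Example: 3 inputs, 10 hidden units.
w, b = nguyen_widrow_init(n_inputs=3, n_hidden=10)

The point of the rescaling is that each unit's sigmoid transitions over a distinct slice of the input space at the start of training, which is what gives the speed-up the original paper reports.
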
“…The parameters of the network are initialised according to the Nguyen-Widrow method [19], and a1 and a2 are initially set to one and zero, respectively. It should be noted that the objective function E is adapted at each iteration since the hyperparameters a_i are re-estimated.…”
Section: Methods (mentioning)
confidence: 99%
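
The quoted statement does not spell out the objective E, but its description (two hyperparameters, initially a1 = 1 and a2 = 0, re-estimated at each iteration) matches the common regularised form E = a1*E_D + a2*E_W, with E_D a sum-of-squares data error and E_W a weight penalty. The sketch below assumes that form; it is a reading of the statement, not the cited paper's actual objective.

import numpy as np

def regularised_objective(residuals, weights, a1=1.0, a2=0.0):
    """Assumed composite objective E = a1 * E_D + a2 * E_W.

    residuals: prediction errors on the training data (E_D term).
    weights:   all network weights, flattened (E_W term).
    a1, a2:    hyperparameters, started at 1 and 0 as in the quote
               and re-estimated between training iterations.
    """
    e_d = float(np.sum(np.square(residuals)))
    e_w = float(np.sum(np.square(weights)))
    return a1 * e_d + a2 * e_w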