2017
DOI: 10.1007/s11063-017-9660-0

Evolutionary Based Weight Decaying Method for Neural Network Training

Cited by 3 publications (1 citation statement)
References 25 publications
“…The paper by Geman et al. [51] as well as the article by Hawkins [52] thoroughly discussed the topic of overfitting. Examples of techniques proposed to tackle this problem are weight sharing methods [53,54], methods that reduce the number of parameters of the model (pruning methods) [55,56], weight elimination [57][58][59], weight decaying methods [60,61], dropout methods [62,63], the Sarprop method [64], and positive correlation methods [65]. Recently, a variety of papers have proposed methods to handle the overfitting problem in various cases, such as the use of genetic algorithms for training data selection in RBF networks [66], the evolution of RBF models using genetic algorithms for rainfall prediction [67], and pruning decision trees using genetic algorithms [68,69].…”
Citation type: mentioning (confidence: 99%)
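
For context on the weight-decaying technique the excerpt cites [60,61], below is a minimal sketch of plain L2-style weight decay applied in a gradient-descent step. It illustrates the generic regularizer only, not the evolutionary variant proposed in the paper under review; all names (lr, decay, loss_grad) and the toy quadratic loss are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of weight decay (L2 regularization) in gradient descent.
# This shows the generic technique the excerpt cites [60,61]; it is NOT
# the evolutionary weight-decaying method proposed in the paper.
# All names and the toy loss below are illustrative assumptions.

rng = np.random.default_rng(0)
w = rng.normal(size=4)      # toy "network weights"
lr = 0.1                    # learning rate
decay = 1e-2                # weight-decay coefficient (lambda)

def loss_grad(w):
    # Gradient of the toy loss 0.5 * ||w - 1||^2 with respect to w.
    return w - 1.0

for _ in range(100):
    # The penalty (lambda/2)*||w||^2 adds lambda*w to the gradient,
    # shrinking every weight toward zero at each step.
    w -= lr * (loss_grad(w) + decay * w)

print(w)  # each entry converges near 1/(1 + decay) ≈ 0.990, not 1.0
```

The decay term pulls every weight toward zero: with coefficient lambda, the toy loss's minimizer shifts from 1.0 to 1/(1 + lambda). This shrinkage is the mechanism the cited weight-decaying methods use to curb overfitting.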