2007
DOI: 10.1016/j.autcon.2006.12.007

A new interactive model for improving the learning performance of back propagation neural network

Cited by 15 publications (6 citation statements)
References 20 publications

“…Many different types of methods were developed to overcome the local-minimum problem of the backpropagation algorithm. One obvious approach is to concentrate on optimization of learning rates or step sizes [30–33] and the employment of various minimization methodologies such as conjugate gradient [34,35], the Levenberg-Marquardt algorithm [36,37], stochastic backpropagation [38], genetic algorithms [39,40], simulated annealing [41,42], or a hybrid of optimization methods [43]. The second approach focuses on optimizing the network architecture during training by employing a genetic algorithm [44–47], a self-organized network [48], or fuzzy logic [49–51].…”
Section: Introduction
confidence: 99%
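The first family of fixes the quote names, adapting the step size as training proceeds, can be illustrated with a minimal sketch (not from the cited paper): a "bold driver" rule that grows the global learning rate while the error keeps falling and shrinks it after an uphill step. All names here (bold_driver_bp and friends) are illustrative assumptions.

```python
# Minimal sketch of step-size adaptation for plain backpropagation.
# Not the cited paper's method; a classic "bold driver" heuristic.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 2-2-1 network trained on XOR, a classic plateau/local-minimum test case.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

def bold_driver_bp(epochs=5000, lr=0.5, grow=1.05, shrink=0.5):
    W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
    W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)
    prev_err = np.inf
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)          # hidden activations
        y = sigmoid(h @ W2 + b2)          # network output
        err = 0.5 * np.sum((y - t) ** 2)
        # Adapt the step size from the error trend: accelerate while
        # improving, back off sharply after an uphill step.
        lr = lr * grow if err < prev_err else lr * shrink
        prev_err = err
        # Standard backpropagation of the squared-error gradient.
        d_out = (y - t) * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)
    return prev_err

print("final SSE:", bold_driver_bp())
```

A full bold-driver implementation would also undo the uphill step before retrying with the smaller rate; the sketch omits that for brevity.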
“…To address these problems effectively, many improved approaches have been proposed, such as avoiding premature saturation, weight adjustment, normalisation, interaction, learning-rate adaptation and activation-function modification (Yu and Chen 1997; Yam and Chow 2000; Trentin 2001; Magoulas, Plagianakos, and Vrahatis 2002; Eom, Jung, and Sirisena 2003; Zweiri, Whidborne, and Seneviratne 2003; Yan and Jue 2004; Zhang 2006, 2008; Wang, Kao, and Lee 2007). All these approaches have improved the performance of BP to some extent.…”
Section: Introduction
confidence: 93%
“…These improvements mainly concern dynamic variation of the learning rate and momentum, and the selection of better activation and cost functions. In 2007, Wang et al. [20] proposed the individual inference adjusting learning rate (IIALR) technique to enhance the learning performance of the BP network. The weight-adjustment mechanism of IIALR assigns an individual learning rate to each weight.…”
Section: Research Trends of BP Learning
confidence: 99%
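The paper's exact IIALR update rule is not reproduced on this page, so the sketch below illustrates the same per-weight idea with the classic delta-bar-delta rule (Jacobs, 1988): every weight carries its own learning rate, raised when the current gradient agrees in sign with a running average of past gradients and lowered when it flips. All names (per_weight_step etc.) are illustrative assumptions, not the paper's API.

```python
# Delta-bar-delta-style sketch of per-weight learning rates.
# Not the IIALR rule itself; a stand-in for the same mechanism.
import numpy as np

def per_weight_step(w, grad, lr, gbar,
                    kappa=0.01, phi=0.5, theta=0.7):
    """One delta-bar-delta update for a weight array `w`.

    w, grad : current weights and their gradient (same shape)
    lr      : per-weight learning rates (same shape as w)
    gbar    : exponential average of past gradients
    """
    agree = grad * gbar > 0                        # consistent sign -> speed up
    lr = np.where(agree, lr + kappa, lr)
    lr = np.where(grad * gbar < 0, lr * phi, lr)   # sign flip -> slow down
    gbar = (1 - theta) * grad + theta * gbar       # update gradient average
    return w - lr * grad, lr, gbar

# Usage: keep `lr` and `gbar` arrays alongside each weight matrix and call
# per_weight_step in place of the fixed-rate backpropagation update.
w = np.array([0.5, -0.3])
lr = np.full_like(w, 0.1)
gbar = np.zeros_like(w)
for _ in range(3):
    grad = 2 * w                # gradient of a toy quadratic, sum(w**2)
    w, lr, gbar = per_weight_step(w, grad, lr, gbar)
print(w, lr)
```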