1993
DOI: 10.1117/12.152626

Application of simulated annealing to the backpropagation model improves convergence

Abstract: The Backpropagation technique for supervised learning of internal representations in multilayer artificial neural networks is an effective approach to solving the gradient descent problem. However, as a primarily deterministic solution, it will attempt to take the best path to the nearest minimum, whether global or local. If a local minimum is reached, the network will fail to learn or will learn a poor approximation of the solution. This paper describes a novel approach to the Backpropagation model based…
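The idea the abstract outlines, pairing backpropagation's deterministic descent with a simulated-annealing-style stochastic element so training can escape local minima, can be illustrated with a minimal sketch. Everything below (the toy loss standing in for a network's training error, the geometric temperature schedule, the learning rate, and the acceptance rule) is an illustrative assumption, not the paper's actual algorithm.

import numpy as np

# Minimal sketch: a gradient step, as backpropagation would produce, plus an
# annealed random perturbation accepted with a Metropolis rule, so the search
# can climb out of local minima while the temperature is still high.
rng = np.random.default_rng(0)

def loss(w):
    # Toy non-convex function standing in for a network's training error.
    return float(np.sum(w ** 2) + np.sum(np.sin(3.0 * w)))

def grad(w):
    return 2.0 * w + 3.0 * np.cos(3.0 * w)

w = rng.normal(size=4)            # stand-in for the network weights
lr, temperature = 0.05, 1.0       # learning rate and initial temperature
best_w, best_loss = w.copy(), loss(w)

for step in range(500):
    proposal = w - lr * grad(w)   # deterministic, backpropagation-style step
    proposal += 0.1 * rng.normal(scale=np.sqrt(temperature), size=w.shape)  # annealed noise
    delta = loss(proposal) - loss(w)
    # Accept all improvements; accept uphill moves with annealed probability.
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        w = proposal
    if loss(w) < best_loss:
        best_w, best_loss = w.copy(), loss(w)
    temperature *= 0.99           # geometric cooling (illustrative choice)

print(best_loss)

As the temperature falls, the update reduces to plain gradient descent, which matches the intuition in the citation statements below: the stochastic element matters early, when the risk of settling into a poor local minimum is greatest.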

Cited by 15 publications (4 citation statements)
References: 0 publications
“…the BPNN is prone to be trapped into a local minimum, global and stochastic search technologies, e.g. simulated annealing (Owen and Abunawass 1993), electromagnetism algorithm (Ma and Ji 1998), and GAs (Hong et al 2001, Zhao and Huang 2002), were introduced and proved to be useful in some specific applications.…”
Section: Adaptive Back-propagation Neural Network
confidence: 99%
“…To avoid the local-trap problem, simulated annealing (SA) (Kirkpatrick et al, 1983) has been employed by some authors to train neural networks. Amato et al (1991) and Owen & Abunawass (1993) show that for complex learning tasks, SA has a better chance to converge to a global minimum than have the gradient-based algorithms. Geman & Geman (1984) show that the global minimum can be reached by SA with probability 1 if the temperature decreases at a logarithmic rate of O(1/log t), where t denotes the number of iterations.…”
Section: Introduction
confidence: 99%
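The logarithmic rate cited above (Geman & Geman, 1984) amounts to a schedule of the form T(t) = c / log(1 + t). The short sketch below contrasts it with the geometric cooling commonly used in practice; the constant c and the 0.99 decay factor are assumptions chosen only for illustration.

import math

def logarithmic_cooling(t, c=1.0):
    # Geman & Geman-style schedule: slow enough that SA reaches a global
    # minimum with probability 1, but often impractically slow to run.
    return c / math.log(1 + t)

def geometric_cooling(t, t0=1.0, alpha=0.99):
    # Faster schedule used in practice; the convergence guarantee is lost.
    return t0 * alpha ** t

for t in (10, 100, 1_000, 10_000):
    print(t, round(logarithmic_cooling(t), 4), round(geometric_cooling(t), 6))

The gap between the two schedules is why the statement below qualifies the guarantee as holding only when the cooling is slow enough.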
“…Amato, Apolloni, Caporali, Madesani, & Zanaboni (1991) and Owen and Abunawass (1993) used simulated annealing to improve convergence. This method could guarantee global minimum convergence if the cooling temperature is slow enough.…”
Section: Introduction
confidence: 99%