2001
DOI: 10.1007/978-1-4613-0279-7_26

Supervised Training Using Global Search Methods

Abstract: Supervised learning in neural networks based on the popular backpropagation method can often be trapped in a local minimum of the error function. The class of backpropagation-type training algorithms includes local minimization methods that have no mechanism that allows them to escape the influence of a local minimum. The existence of local minima is due to the fact that the error function is the superposition of nonlinear activation functions that may have minima at different points, which sometimes resul…
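The truncated abstract argues that backpropagation-type training is a local minimization procedure with no way to escape a poor basin of the error function. As a minimal illustration of that point (and of why multiple random restarts, the crudest global search, can help), the following sketch trains a tiny 1-2-1 network by plain gradient descent from one random initialization and then from ten. The data, network size, learning rate, and the use of numerical gradients in place of true backpropagation are all assumptions for illustration, not the method of the paper.

```python
# Minimal sketch (not the paper's algorithm): plain gradient descent on a tiny
# 1-2-1 network can stall in a poor minimum, while restarting from several random
# initial weight vectors -- the crudest "global search" -- often finds a lower error.
# Data, architecture, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y = np.sin(3.0 * X)                                # a target the tiny net fits imperfectly

def forward(w, X):
    W1, b1 = w[:2].reshape(1, 2), w[2:4]
    W2, b2 = w[4:6].reshape(2, 1), w[6]
    h = np.tanh(X @ W1 + b1)                       # superposition of nonlinear activations
    return h @ W2 + b2

def error(w):
    return float(np.mean((forward(w, X) - y) ** 2))

def train_gradient_descent(w, lr=0.05, steps=2000, eps=1e-4):
    """Plain gradient descent; numerical gradients stand in for backpropagation."""
    for _ in range(steps):
        g = np.array([(error(w + eps * e) - error(w - eps * e)) / (2 * eps)
                      for e in np.eye(len(w))])
        w = w - lr * g
    return w

# One run from a single random start...
print("single start:", error(train_gradient_descent(rng.normal(scale=2.0, size=7))))

# ...versus the simplest global strategy: many restarts, keep the best local minimum.
best = min(error(train_gradient_descent(rng.normal(scale=2.0, size=7))) for _ in range(10))
print("best of 10  :", best)
```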

Cited by 8 publications (4 citation statements)
References 20 publications
“…Although intuitively it makes sense that accurate initial mappings would set learners on a fast track to vocabulary knowledge, this might not necessarily be the case. Indeed, studies in neural network learning suggest that under certain circumstances initial correct guesses can leave the system stranded in local minima from which it cannot easily escape (Gori & Tesi, ; Plagianakos, Magoulas, & Vrahatis, ). In addition, although correct initial biases in combination with supporting subsequent evidence could be sufficient for learning (e.g., Xu & Tenenbaum, ), learning may be more efficient with errors because errors clearly rule out alternatives.…”
Section: Introduction
confidence: 99%
“…One of the major drawbacks of the BP algorithm is convergence to local minima [40]. However, the reality of local minima is a consequence of the fact that the error curvature is simply the superposition of non-linear activation functions that may exhibit local minima at different locations, which occasionally results in a non-convex curvature of the error cost function [41], [42]. One way to overcome this challenge is through the use of improved gradient-based algorithms that employ parameter adaptation strategies [43].…”
Section: Artificial Neural Network (ANN)
confidence: 99%
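The statement above attributes an escape route to "improved gradient-based algorithms that employ parameter adaptation strategies". One classic example of such a strategy, chosen here purely as an illustration and not as the specific scheme of the cited works, is the "bold driver" learning-rate rule: enlarge the step while the error keeps falling, and backtrack and shrink it when the error rises.

```python
# "Bold driver" learning-rate adaptation: one generic example of a parameter
# adaptation strategy (illustrative only, not the specific scheme of the works
# cited above). Grow the step while the error decreases; reject and shrink when it rises.
import numpy as np

def bold_driver(grad, error, w0, lr=0.1, up=1.1, down=0.5, steps=200):
    w = np.asarray(w0, dtype=float)
    prev = error(w)
    for _ in range(steps):
        cand = w - lr * grad(w)                    # gradient step with current rate
        e = error(cand)
        if e <= prev:                              # progress: accept and speed up
            w, prev, lr = cand, e, lr * up
        else:                                      # overshoot: reject and slow down
            lr *= down
    return w

# Toy usage on an assumed 1-D non-convex error surface with two minima.
error = lambda w: float(w ** 4 - 3 * w ** 2 + w)
grad = lambda w: 4 * w ** 3 - 6 * w + 1
print(bold_driver(grad, error, w0=2.0))
```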
“…Herein, this algorithm includes local minimization methods that have no mechanism allowing them to escape from the influence of a local minimum [12]. At this point, optimization algorithms are added to NN for prevention of this problem.…”
Section: Introduction
confidence: 99%
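The citing work above notes that optimization algorithms are combined with neural network training to avoid the influence of local minima. As a hedged sketch of what such a pairing can look like, here is a bare-bones differential evolution loop (DE/rand/1/bin) that searches weight space directly instead of following gradients; the population size, bounds, and control parameters are illustrative assumptions, and the paper itself may use a different global method.

```python
# Bare-bones DE/rand/1/bin: a population-based global search over a weight
# vector, used here only to illustrate pairing a network with a global optimizer.
# Bounds, population size, and F/CR values are assumptions, not the paper's settings.
import numpy as np

def differential_evolution(error, dim, pop=20, F=0.5, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(-2.0, 2.0, size=(pop, dim))        # candidate weight vectors
    fit = np.array([error(p) for p in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = P[idx]
            mutant = a + F * (b - c)                    # differential mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(cross, mutant, P[i])
            f = error(trial)
            if f <= fit[i]:                             # greedy selection
                P[i], fit[i] = trial, f
    return P[np.argmin(fit)], float(fit.min())

# Usage: pass any weight-vector error function, e.g. the mean squared error of a
# small network over its training set:
#   best_w, best_e = differential_evolution(error, dim=7)
```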