2012
DOI: 10.3182/20120711-3-be-2027.00150

Convergence of Learning Algorithms in Neural Networks for Adaptive Identification of Nonlinearly Parameterized Systems

Cited by 4 publications (5 citation statements)
References 21 publications
“…It turned out that, at least, in the ideal case, the set W* containing these w*'s is not one-point [26,27]…”
Section: Preliminaries
confidence: 99%
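A toy illustration of how this can happen (our own example, not taken from [26,27]): when parameters enter the model as a product, the optimal-weight set is a whole curve rather than a single point.

% Toy example (ours, not from [26,27]): model f(w,x) = w_1 w_2 x, target f*(x) = c x.
\[
  f(w, x) = w_1 w_2 \, x, \qquad f^*(x) = c \, x .
\]
% Every weight pair on the hyperbola w_1 w_2 = c reproduces f* exactly, so
\[
  W^* = \{\, (w_1, w_2) : w_1 w_2 = c \,\},
\]
% a one-dimensional curve, not a one-point set.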
“…This paper is an extension of [26,27]. The main effort is focused on establishing sufficient conditions under which the global convergence of the gradient algorithm for learning neural network models in stochastic environments is ensured.…”
Section: Introduction
confidence: 99%
“…This approach has been exploited by the authors in [24] to derive some local convergence results in a stochastic framework for standard online gradient algorithms with a constant learning rate.…”
Section: Introduction
confidence: 99%
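As a minimal sketch (ours, not the authors' algorithm from [24]), a standard online gradient update with a constant learning rate eta for a nonlinearly parameterized model f(w, x) can be written as follows; the model and data stream here are illustrative assumptions.

import numpy as np

def online_gradient_step(w, x, y, f, grad_f, eta=0.1):
    # One online step on the squared error (1/2)(f(w,x) - y)^2
    # with a constant learning rate eta.
    e = f(w, x) - y
    return w - eta * e * grad_f(w, x)

# Illustrative nonlinearly parameterized model: f(w, x) = w1 * w2 * x.
f = lambda w, x: w[0] * w[1] * x
grad_f = lambda w, x: np.array([w[1] * x, w[0] * x])

rng = np.random.default_rng(0)
w = np.array([0.5, 0.5])
for t in range(1000):                 # stream of examples from the target y = 2x
    x_t = rng.uniform(-1.0, 1.0)
    w = online_gradient_step(w, x_t, 2.0 * x_t, f, grad_f)
print(w, w[0] * w[1])                 # the product w1*w2 approaches 2

Note that any pair with w1*w2 = 2 is optimal here, which is exactly the non-singleton optimal-weight set W* mentioned above.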
“…Several of these authors assumed that the training set must be finite, whereas in online identification schemes this set is theoretically infinite. Nevertheless, we recently observed a non-stochastic learning process in which this procedure did not converge for a certain infinite sequence of training examples [24].…”
Section: Introduction
confidence: 99%
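A toy demonstration (ours, not the counterexample of [24]) of how a constant-learning-rate procedure can fail to converge on an infinite training sequence: with targets that alternate forever, the iterate settles into a two-cycle rather than a point.

# Toy non-convergence (ours, not from [24]): scalar model f(w) = w,
# squared-error loss, constant learning rate, alternating infinite stream.
w, eta = 0.0, 0.5
for t in range(20):
    y = 1.0 if t % 2 == 0 else -1.0   # targets alternate forever: +1, -1, +1, ...
    e = w - y                         # prediction error of f(w) = w
    w -= eta * e                      # constant-rate gradient step
    print(t, round(w, 4))             # w oscillates between +1/3 and -1/3

The two-step map here is a contraction onto the cycle (+1/3, -1/3), so the weight never converges to a single point even though the mean-square optimum w = 0 exists.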