2017
DOI: 10.1109/tnnls.2016.2572310
An Approach to Stable Gradient-Descent Adaptation of Higher Order Neural Units

Abstract: Stability evaluation of a weight-update system of higher-order neural units (HONUs) with polynomial aggregation of neural inputs (also known as classes of polynomial neural networks) for adaptation of both feedforward and recurrent HONUs by a gradient-descent method is introduced. The core of the approach is the spectral radius of the weight-update system, which allows stability to be monitored and maintained at every individual adaptation step. Assuring stability of the weight-update s…
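The abstract's central idea — monitoring the spectral radius of the weight-update system at every adaptation step — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's method: the quadratic HONU, the toy target, and the learning-rate backoff rule are assumptions made here for demonstration. For plain sample-wise gradient descent, one step can be written as w_new = A @ w + μ·y·colx with A = I − μ·colx·colxᵀ, whose spectral radius exceeds 1 exactly when μ‖colx‖² > 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_honu(w, x):
    """Quadratic HONU: y = w @ colx, where colx stacks 1, x_i, and x_i*x_j."""
    colx = np.concatenate(([1.0], x, np.outer(x, x)[np.triu_indices(len(x))]))
    return w @ colx, colx

n_in = 3
n_w = 1 + n_in + n_in * (n_in + 1) // 2   # 10 weights for a quadratic HONU
w = np.zeros(n_w)
mu = 0.1  # initial learning rate

for step in range(2000):
    x = rng.normal(size=n_in)
    y_true = 0.5 + x[0] - 0.3 * x[1] * x[2]   # toy quadratic target (assumed)
    y, colx = quadratic_honu(w, x)
    e = y_true - y
    # Weight-update system for this step: w_new = A @ w + mu * y_true * colx,
    # A = I - mu * outer(colx, colx). Its eigenvalues are 1 (repeated) and
    # 1 - mu * ||colx||^2, so rho(A) > 1 exactly when mu * ||colx||^2 > 2.
    A = np.eye(n_w) - mu * np.outer(colx, colx)
    rho = np.abs(np.linalg.eigvals(A)).max()
    if rho > 1.0 + 1e-9:
        mu *= 0.5        # back off the learning rate and skip the unstable step
        continue
    w = w + mu * e * colx  # stable gradient-descent update

print("last sample error magnitude:", abs(e))
```

The backoff rule is a simple stand-in for the stability-maintenance strategy; the point is only that ρ(A) is cheap to monitor per step and directly flags when the current learning rate would make the update diverge.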

Cited by 17 publications (13 citation statements)
References 48 publications
“…Thus, the previous sections thoroughly prove that the sufficient condition to maintain learning convergence under ISS's umbrella and in the sense of BIBS stability is (20), which yields (21) for non-momentum gradient algorithms. For experimental results, e.g., with a class of polynomial neural architectures, please see paper [10], where the effect of the spectral radius ρ(A) on gradient learning is studied and shown in detail. Also, this letter's proofs are the theoretical complements to the earlier, more experimentally focused work [10], which did not explicitly show the whole theoretical kinship.…”
Section: Consequences To Real Applications
confidence: 99%
“…For experimental results, e.g., with a class of polynomial neural architectures, please see paper [10], where the effect of the spectral radius ρ(A) on gradient learning is studied and shown in detail. Also, this letter's proofs are the theoretical complements to the earlier, more experimentally focused work [10], which did not explicitly show the whole theoretical kinship. The BIBS condition (20) can be restated as…”
Section: Consequences To Real Applications
confidence: 99%
“…Gradient-based optimization algorithms, such as gradient descent, are usually used for HONU training [11]. After the first part, in which the data for driving-condition evaluation are prepared, comes the second part, where the vector of HONU weights prepared in the first part is fed to a suitable clustering algorithm for cluster training or evaluation. The most widespread clustering algorithms are based on the K-means algorithm [12].…”
Section: Current Driving Conditions Evaluation Via Neural Network
confidence: 99%
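The pipeline quoted above — adapted HONU weight vectors fed to a K-means-style clusterer to label driving conditions — can be sketched as follows. This is an illustrative sketch under stated assumptions, not the cited implementation: the weight vectors are synthetic stand-ins, the two "driving regimes" are invented for the demo, and plain Lloyd's K-means is used in place of whatever variant [12] describes.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain Lloyd's K-means on the rows of X; returns (centroids, labels)."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each weight vector to its nearest centroid (Euclidean).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical stand-in for adapted HONU weight vectors: two driving
# regimes are assumed to leave the weights scattered around two points.
W = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(40, 5)),   # e.g. "smooth" driving
    rng.normal(loc=1.0, scale=0.1, size=(40, 5)),   # e.g. "aggressive" driving
])
centroids, labels = kmeans(W, k=2)
print("cluster centers (mean over weight dims):",
      sorted(np.round(centroids.mean(axis=1), 2)))
```

The design choice worth noting is that clustering happens in weight space, not signal space: once a HONU has adapted to a stretch of data, its weight vector is a compact fingerprint of the current regime, so a generic clusterer can separate conditions without hand-crafted features.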