2023
DOI: 10.1109/tnnls.2021.3123533

Letter on Convergence of In-Parameter-Linear Nonlinear Neural Architectures With Gradient Learnings

Abstract: This letter summarizes and proves the concept of bounded-input bounded-state (BIBS) stability for weight convergence of a broad family of in-parameter-linear nonlinear neural architectures, as it generally applies to a broad family of incremental gradient learning algorithms. A practical BIBS convergence condition results from the derived proofs for every individual learning point or batch of points, making it suitable for real-time applications.
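The convergence claim in the abstract can be pictured with a small incremental-gradient example. The Python sketch below trains a linear-in-parameters model sample by sample and caps the learning rate with a normalized-LMS-type bound, 0 < µ < 2/||x(k)||^2; this bound stands in for the BIBS condition proved in the letter (whose exact form is not reproduced here), and all names and data in the snippet are illustrative.

```python
import numpy as np

# Sketch: incremental gradient (sample-by-sample) training of an
# in-parameter-linear model y = w^T x with a sample-wise step-size cap.
# The normalized-LMS-type bound 0 < mu < 2 / ||x(k)||^2 used here is a
# common sufficient condition and only stands in for the BIBS condition
# derived in the letter, which may take a different form.

rng = np.random.default_rng(0)
n_features, n_samples = 5, 200
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
d = X @ w_true + 0.01 * rng.normal(size=n_samples)   # bounded target data

w = np.zeros(n_features)
for k in range(n_samples):
    x = X[k]
    e = d[k] - w @ x                   # instantaneous error e(k)
    mu_max = 2.0 / (x @ x + 1e-12)     # sample-wise stability bound
    mu = 0.5 * mu_max                  # stay safely inside the bound
    w = w + mu * e * x                 # gradient step on Q(k) = e(k)^2

print("weight error:", np.linalg.norm(w - w_true))
```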

Cited by 2 publications (6 citation statements) · References 19 publications
“…On the optimal information processing and distribution side, we utilize a class of shallow neural architectures that can be used as suitable learning predictors in the sense of fog computing [36]. These neural architectures are polynomial neural architectures, that is, Higher-Order Neural Units (HONUs) [37], [38] with a customized polynomial order; they form a subclass of In-Parameter-Linear-Nonlinear Architectures (IPLNAs) [38] with intriguing properties regarding computational efficiency, simultaneous weight-convergence assurance [39], and stability monitoring of the underlying system represented by the training data. Recall that HONUs can also be viewed as energy-saving machine learning tools due to their properties and their ability to run efficiently on low-power devices.…”
Section: Requirements (mentioning)
confidence: 99%
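For readers unfamiliar with HONUs, the following sketch shows why they are in-parameter-linear: the nonlinearity lives entirely in a polynomial feature vector of the inputs, while the output is a plain inner product with the weights. The helper name honu_features and the second-order setting are illustrative choices, not code from the cited papers.

```python
import numpy as np
from itertools import combinations_with_replacement

def honu_features(u, order=2):
    """Polynomial feature vector of a HONU: products of `order` entries of
    the augmented input [1, u_1, ..., u_n]. Because the constant 1 is
    included, all lower-order terms (bias, linear, ...) appear as well."""
    x_aug = np.concatenate(([1.0], np.asarray(u, dtype=float)))
    return np.array([np.prod(c)
                     for c in combinations_with_replacement(x_aug, order)])

# The HONU output is linear in its weights, y = w^T colx(u), so the
# linear-in-parameters convergence analysis of the letter applies.
u = np.array([0.3, -1.2, 0.7])
colx = honu_features(u, order=2)   # quadratic (second-order) neural unit
w = np.zeros_like(colx)
y = w @ colx
```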
“…where µ is the learning rate, which can vary over time depending on the applied form of learning, and Q is the error criterion Q(k) = e(k)^2. It is shown in [39] that for IPLNAs, including HONUs, the weight-update system in (7) can be expressed in linear time-variant state-space form.…”
Section: Polynomial Neural Architectures for Efficient Fog Computing (mentioning)
confidence: 99%
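The statement quoted above can be made concrete with a short sketch. Assuming the usual incremental gradient step on Q(k) = e(k)^2 (with the factor of 2 absorbed into µ), substituting e(k) = d(k) − x(k)^T w(k) turns the update into a linear time-variant state equation in the weights; the function below is an illustrative reconstruction of that form, not code from the cited work [39].

```python
import numpy as np

# Sketch of the weight-update system written as a linear time-variant
# state-space system in the weights. From w(k+1) = w(k) + mu * e(k) * x(k)
# with e(k) = d(k) - x(k)^T w(k), substitution gives
#     w(k+1) = (I - mu * x(k) x(k)^T) w(k) + mu * d(k) * x(k),
# i.e. a state equation with time-variant matrix A(k) = I - mu*x(k)x(k)^T.

def weight_update_state_space(w, x, d, mu):
    A = np.eye(len(w)) - mu * np.outer(x, x)   # time-variant state matrix A(k)
    w_next = A @ w + mu * d * x                # input term driven by target d(k)
    # Eigenvalue of A(k) along x(k); it lies inside the unit circle whenever
    # 0 < mu < 2 / ||x(k)||^2 (the remaining eigenvalues of A(k) equal 1).
    lam = 1.0 - mu * (x @ x)
    return w_next, lam

x = np.array([0.5, -1.0, 2.0])
w_next, lam = weight_update_state_space(np.zeros(3), x, d=1.0, mu=0.3)
print(w_next, lam)
```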