2023
DOI: 10.1016/j.ins.2022.11.157
An adaptive gradient-descent-based neural networks for the on-line solution of linear time variant equations and its applications

Cited by 9 publications (3 citation statements)
References 32 publications
“…Thus, g is the derivative of the activation function: if g > 1, the backpropagated gradient update grows exponentially as the number of layers increases, that is, the gradient explodes; if g < 1, the backpropagated gradient update decays exponentially as the number of layers increases, that is, the gradient vanishes [16].…”
Section: Theoretical Analysis Of Data Collapse Phenomenon Based On Gr...
confidence: 99%
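The quoted statement is a standard chain-rule argument: each layer contributes one factor of g to the backpropagated gradient, so its magnitude scales like g^n in the depth n. A minimal numeric sketch of that scaling, using illustrative values of g and depth not taken from the cited paper:

```python
# Gradient magnitude after n layers when each layer contributes a
# factor g (the activation-function derivative): |grad| ~ g**n.
for g in (1.1, 0.9):  # g > 1 -> gradient explosion, g < 1 -> gradient vanishing
    for n in (10, 50, 100):
        print(f"g = {g}, depth = {n}: g**n = {g**n:.3e}")
```

Even these mild values show the effect: 1.1**100 is roughly 1.4e4 while 0.9**100 is roughly 2.7e-5, which is why depth amplifies any per-layer deviation of the derivative from 1.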
“…Typically, because the conventional DRNN algorithms mentioned above use fixed sampling periods and fixed convergence factors, it is difficult for them to balance computational precision against convergence rate, which limits their dynamic and convergence performance. Therefore, some researchers have tried to introduce various adaptive mechanisms into model/algorithm design (Song et al, 2008; Yang M. et al, 2020; Dai et al, 2022; Cai and Yi, 2023). For example, Yang M. et al (2020) proposed two discretized RNN algorithms with an adaptive Jacobian matrix.…”
Section: Introduction
confidence: 99%
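To make the fixed-parameter baseline that this passage criticizes concrete, the loop below Euler-discretizes a gradient-descent neural network for a time-variant linear equation A(t)x(t) = b(t). This is a minimal sketch: the system A(t), b(t) and the parameters lam and tau are illustrative choices of my own, not taken from the cited works.

```python
import numpy as np

# Illustrative time-variant linear system A(t) x(t) = b(t)
# (not from the cited papers; chosen so A(t) stays well conditioned).
def A(t):
    return np.array([[2.0 + np.sin(t), 0.5],
                     [0.5, 2.0 + np.cos(t)]])

def b(t):
    return np.array([np.cos(t), np.sin(t)])

def gd_rnn_fixed(x0, lam=50.0, tau=1e-3, T=2.0):
    """Gradient-descent neural network with a FIXED convergence factor
    lam and a FIXED sampling period tau, Euler-discretized as
        x_{k+1} = x_k - tau * lam * A(t_k)^T (A(t_k) x_k - b(t_k)).
    A fixed gain tracks the drifting solution only up to a lag-induced
    steady-state residual -- the precision/speed imbalance the quoted
    passage refers to."""
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < T:
        e = A(t) @ x - b(t)               # residual error at time t_k
        x = x - tau * lam * (A(t).T @ e)  # one Euler step of the GD dynamics
        t += tau
    return x, np.linalg.norm(A(T) @ x - b(T))

print(gd_rnn_fixed(np.zeros(2)))
```

With a fixed lam, raising the gain speeds convergence but risks unstable discrete updates, while lowering it leaves a larger tracking residual; this is the trade-off that motivates the adaptive mechanisms cited above.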
“…For example, Yang M. et al (2020) proposed two discretized RNN algorithms with an adaptive Jacobian matrix. Cai and Yi (2023) developed an adaptive gradient-descent-based RNN model, grounded in Lyapunov theory, for solving time-variant problems. Dai et al (2022) proposed a hybrid RNN model that uses a fuzzy adaptive control strategy to generate a fuzzy adaptive factor whose size changes adaptively with the residual error (RE).…”
Section: Introduction
confidence: 99%
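The common idea behind these adaptive schemes, a convergence factor driven by the residual error, can be sketched by scaling the gain of the fixed-parameter loop above with the RE norm. This is a simplified stand-in, not the actual Lyapunov-derived or fuzzy-inference adaptation laws of Cai and Yi (2023) or Dai et al (2022); lam0 and c are illustrative, and A(t), b(t) are reused from the previous sketch.

```python
def gd_rnn_adaptive(x0, lam0=10.0, c=5.0, tau=1e-3, T=2.0):
    """Same Euler loop as gd_rnn_fixed, but with a residual-driven
    convergence factor (a simplified stand-in for the cited works'
    adaptive/fuzzy factors; their exact laws differ):
        lam_k = lam0 * (1 + c * ||A(t_k) x_k - b(t_k)||).
    A large residual yields a large gain for fast convergence; near the
    solution the gain relaxes to lam0, keeping the discrete update stable."""
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < T:
        e = A(t) @ x - b(t)
        lam = lam0 * (1.0 + c * np.linalg.norm(e))  # residual-driven gain
        x = x - tau * lam * (A(t).T @ e)
        t += tau
    return x, np.linalg.norm(A(T) @ x - b(T))

print(gd_rnn_adaptive(np.zeros(2)))
```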