2022
DOI: 10.3390/math10081208
Zeroing Neural Network for Pseudoinversion of an Arbitrary Time-Varying Matrix Based on Singular Value Decomposition

Abstract: Many researchers have investigated the time-varying (TV) matrix pseudoinverse problem in recent years, owing to its importance in addressing TV problems in science and engineering. In this paper, the problem of calculating the inverse or pseudoinverse of an arbitrary TV real matrix is considered and addressed using the singular value decomposition (SVD) and the zeroing neural network (ZNN) approaches. Since SVD is frequently used to compute the inverse or pseudoinverse of a matrix, this research proposes a new ZNN …
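Since the abstract is truncated here, the paper's ZNN model itself cannot be reproduced; the following is only a minimal NumPy sketch of the SVD-based pseudoinverse computation that the model builds on. The time-varying matrix A(t) is a hypothetical example chosen for illustration, not one from the paper.

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Moore-Penrose pseudoinverse from the SVD: A = U diag(s) V^T  =>  A^+ = V diag(1/s) U^T.

    Singular values below `tol` (relative to the largest one) are treated as zero,
    which is what makes the formula valid for rank-deficient or rectangular A.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * U.T)

# Hypothetical time-varying test matrix A(t), sampled at one instant.
A = lambda t: np.array([[np.sin(t), np.cos(t), 1.0],
                        [np.cos(t), -np.sin(t), 2.0]])
t = 0.7
A_pinv = pinv_via_svd(A(t))
# Sanity check against NumPy's built-in pseudoinverse.
assert np.allclose(A_pinv, np.linalg.pinv(A(t)))
```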

Cited by 19 publications (5 citation statements)
References 24 publications
“…From Figure 1, it can be seen that the relative residual norms of the CGS2, SCGS, GCGS, CGS, and GCGS2 methods show irregular convergence behavior, while the relative residual norm of the QMRGCGS2 method tends to show regular convergence behavior. That is to say, the QMRGCGS2 method has smoother convergence behavior than the CGS and GCGS2 methods [41][42][43]. Finally, the CGS2 and CGS methods, which perform well, are selected to compare against the proposed QMRGCGS2 method on practical application problems.…”
Section: Application (mentioning)
confidence: 99%
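The QMRGCGS2, GCGS2, and CGS2 variants compared in this snippet are not available in standard libraries, so the experiment itself cannot be reproduced here; the sketch below only shows how a relative residual norm history ‖b − A·x_k‖/‖b‖, the quantity plotted in such comparisons, can be recorded for the classical CGS method available in SciPy, on a hypothetical sparse test system.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cgs

# Hypothetical nonsymmetric tridiagonal test system (not from the cited experiments).
n = 200
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

rel_res = []  # relative residual norm ||b - A x_k|| / ||b|| per iteration

def track(xk):
    rel_res.append(np.linalg.norm(b - A @ xk) / np.linalg.norm(b))

x, info = cgs(A, b, maxiter=500, callback=track)
print(f"converged: {info == 0}, iterations recorded: {len(rel_res)}")
# Plotting rel_res on a log scale reproduces the kind of (typically irregular)
# CGS convergence curve that the snippet contrasts with the smoother QMR-type variants.
```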
“…Today their use has expanded to the resolution of generalized inversion problems, including the time-varying Drazin inverse [33], time-varying ML-weighted pseudoinverse [34], time-varying outer inverse [35], time-varying pseudoinverse [36], and core and core-EP inverse [37]. It has further expanded to linear programming tasks [38], quadratic programming tasks [39,40], systems of nonlinear equations [41,42], and systems of linear equations [43,44]. The creation of a ZNN model typically involves two fundamental steps.…”
Section: Introduction (mentioning)
confidence: 99%
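The snippet's closing remark about the two fundamental ZNN design steps can be made concrete with a small simulation. The sketch below is a generic ZNN for inverting a square, nonsingular time-varying A(t), not the paper's SVD-based model, and A(t) is a hypothetical example: step one defines the error matrix E(t) = A(t)X(t) − I, step two imposes the decay law Ė(t) = −λE(t), and the resulting implicit dynamics are integrated with a forward Euler step.

```python
import numpy as np

lam = 10.0            # ZNN design parameter λ > 0 (convergence rate)
dt, T = 1e-3, 2.0     # Euler step size and simulation horizon

# Hypothetical smooth, nonsingular time-varying matrix (not from the paper).
def A(t):
    return np.array([[3.0 + np.sin(t), np.cos(t)],
                     [-np.cos(t),      3.0 + np.sin(t)]])

def A_dot(t, h=1e-6):
    return (A(t + h) - A(t - h)) / (2.0 * h)   # numerical time derivative of A(t)

X = np.linalg.inv(A(0.0))   # start from the exact inverse at t = 0
I = np.eye(2)

for k in range(int(T / dt)):
    t = k * dt
    E = A(t) @ X - I                       # step 1: error function E(t) = A(t)X(t) - I
    rhs = -A_dot(t) @ X - lam * E          # step 2: impose Ė = -λE, i.e. A(t) Ẋ = -Ȧ X - λE
    X_dot = np.linalg.solve(A(t), rhs)     # in simulation the implicit dynamics are solved directly
    X = X + dt * X_dot                     # forward Euler integration

print("residual ||A(T)X(T) - I|| =", np.linalg.norm(A(T) @ X - I))
```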
“…To overcome this shortcoming of numerical methods, recursive neural networks (RNNs) were further designed and studied. At present, RNNs are widely applied to practical engineering and application problems [9][10][11][12][13][14][15][16][17][18][19][20][21]. Additionally, RNNs have the characteristic of parallel distributed processing; hence, they have been extensively employed for solving the time-dependent Lyapunov equation (TDLE) [22][23][24][25][26].…”
Section: Introduction (mentioning)
confidence: 99%
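For context on the time-dependent Lyapunov equation (TDLE) mentioned in this snippet, commonly written A(t)ᵀX(t) + X(t)A(t) + Q(t) = 0 for a stable A(t), the following is only a baseline sketch: it solves the equation pointwise in time with SciPy's algebraic Lyapunov solver rather than with an RNN/ZNN, and the matrices A(t) and Q(t) are hypothetical examples, not taken from the cited works.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable A(t) and symmetric positive-definite Q(t).
def A(t):
    return np.array([[-2.0 - np.sin(t), 1.0],
                     [0.5,              -3.0 + np.cos(t)]])

def Q(t):
    return np.diag([1.0 + 0.5 * np.sin(t), 2.0])

# Pointwise-in-time baseline: at each sample solve  A(t)^T X + X A(t) = -Q(t).
for t in np.linspace(0.0, 1.0, 5):
    X = solve_continuous_lyapunov(A(t).T, -Q(t))
    residual = A(t).T @ X + X @ A(t) + Q(t)
    print(f"t = {t:.2f}, residual norm = {np.linalg.norm(residual):.2e}")
# An RNN/ZNN approach instead tracks X(t) continuously through its own dynamics,
# avoiding a fresh algebraic solve at every time instant.
```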