2008
DOI: 10.13001/1081-3810.1287

Fast computing of the Moore-Penrose inverse matrix

Abstract: In this article a fast computational method is provided in order to calculate the Moore-Penrose inverse of full rank m × n matrices and of square matrices with at least one zero row or column. Sufficient conditions are also given for special types of products of square matrices so that the reverse order law for the Moore-Penrose inverse is satisfied.
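As a point of reference for the full-rank case the abstract describes, the sketch below uses the standard closed-form left/right inverse formulas in NumPy. It is not the paper's own algorithm (which, as noted in the citation statements below, is based on a tensor-product construction), and the function name full_rank_pinv is an illustrative choice.

```python
import numpy as np

def full_rank_pinv(A):
    """Moore-Penrose inverse of a full-rank m x n matrix via the
    closed-form normal-equations formulas (illustration only)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    if m >= n:
        # full column rank: A+ = (A^T A)^{-1} A^T  (a left inverse of A)
        return np.linalg.solve(A.T @ A, A.T)
    # full row rank: A+ = A^T (A A^T)^{-1}  (a right inverse of A)
    return np.linalg.solve(A @ A.T, A).T

# quick check against NumPy's SVD-based pinv on a random full-rank matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
print(np.allclose(full_rank_pinv(A), np.linalg.pinv(A)))
```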


Cited by 35 publications (45 citation statements)
References 6 publications
“…They are available in standard libraries such as LAPACK and Matlab. Nevertheless, this area of research is very active, and some authors propose methods of computing A† based on the formula A† = (AᵀA)⁻¹Aᵀ (see, e.g., [6, 16–19]), where the matrix AᵀA is formed explicitly. This algorithm may be fast compared to the SVD decomposition; it is well known, however, that for ill-conditioned matrices this approach may lead to severe loss of accuracy.…”
mentioning
confidence: 99%
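The accuracy concern in the quoted statement can be reproduced in a few lines of NumPy; the Hilbert-like test matrix and the Penrose-condition residual below are illustrative choices, not taken from the cited papers.

```python
import numpy as np

# Assumed setup: an ill-conditioned 10 x 8 Hilbert-like segment with full
# column rank; any badly conditioned matrix shows the same effect.
A = np.array([[1.0 / (i + j + 1) for j in range(8)] for i in range(10)])

X_normal = np.linalg.inv(A.T @ A) @ A.T   # the quoted formula, A^T A formed explicitly
X_svd = np.linalg.pinv(A)                 # SVD-based pseudoinverse

# residual of the first Penrose condition, ||A X A - A||, as a rough accuracy check
for name, X in [("normal equations", X_normal), ("SVD", X_svd)]:
    print(f"{name:16s}  residual = {np.linalg.norm(A @ X @ A - A):.2e}")
```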
“…In [9], based on Theorem 1, the authors developed an algorithm (the ginv function) for computing the generalized inverse of full rank matrices, and of square matrices with at least one zero row or column where the rest of the matrix is of full rank. In other words, our main concern was to calculate the corresponding k_i in the expansion.…”
[Table 11: Error and Computational Time Results; Matrix Market sparse matrices.]
Section: The Computational Methods
mentioning
confidence: 99%
“…In a recent article [9], the first two authors provided a new method for the fast computation of the generalized inverse of full rank rectangular matrices and of square matrices with at least one zero row or column. In order to reach this goal, a special type of tensor product of two vectors was used, one usually defined in infinite-dimensional Hilbert spaces.…”
Section: Introduction
mentioning
confidence: 99%
“…The topic of ELMs is subject to discussion, as they are considered similar to, or special cases of, radial basis function (RBF) networks, random vector functional-link (RVFL) networks, least squares support vector machines (LS-SVM), or reduced SVM [11]–[13]. In this paper, networks constructed as indicated above were trained using different approaches for the computation of the Moore-Penrose pseudoinverse (using Singular Value Decomposition and the algorithms described in [14], [15]).…”
Section: B. Extreme Learning Machines
mentioning
confidence: 99%
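For readers unfamiliar with how the pseudoinverse enters ELM training, the sketch below shows the standard single-hidden-layer scheme: random input weights and biases, and output weights obtained from the Moore-Penrose pseudoinverse of the hidden-layer output matrix. The network size, sigmoid activation, and toy data are assumptions; the cited paper's architectures and the algorithms of [14], [15] are not reproduced here.

```python
import numpy as np

def train_elm(X, T, n_hidden=50, rng=None):
    """Minimal single-hidden-layer ELM sketch: random input weights,
    output weights solved with the Moore-Penrose pseudoinverse of the
    hidden-layer output matrix H (beta = pinv(H) @ T)."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (not trained)
    b = rng.standard_normal(n_hidden)                  # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # sigmoid hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                       # SVD-based pseudoinverse of H
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy regression example
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
T = np.sin(X[:, :1]) + X[:, 1:] ** 2
W, b, beta = train_elm(X, T, n_hidden=40, rng=0)
print(np.mean((predict_elm(X, W, b, beta) - T) ** 2))
```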