2019
DOI: 10.1137/17m1145409

Concentration of the Frobenius Norm of Generalized Matrix Inverses

Abstract: In many applications it is useful to replace the Moore-Penrose pseudoinverse (MPP) by a different generalized inverse with more favorable properties. We may want, for example, to have many zero entries, but without giving up too much of the stability of the MPP. One way to quantify stability is by how much the Frobenius norm of a generalized inverse exceeds that of the MPP. In this paper we derive finite-size concentration bounds for the Frobenius norm of p-minimal general inverses of iid Gaussian matrices, w…
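A quick way to see the quantity being bounded: for a full-row-rank matrix A, every right inverse G (with AG = I) differs from the MPP by a term whose columns lie in the null space of A, so its Frobenius norm can only exceed that of the MPP. A minimal NumPy sketch of this comparison on an iid Gaussian matrix (variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 50                       # underdetermined: more columns than rows
A = rng.standard_normal((m, n))     # iid Gaussian, full row rank almost surely

# Moore-Penrose pseudoinverse: the minimum-Frobenius-norm right inverse.
A_pinv = np.linalg.pinv(A)

# Any other right inverse is A_pinv + N with A @ N = 0; build N from an
# orthonormal basis of the null space of A (last n - m right singular vectors).
_, _, Vt = np.linalg.svd(A)
N = Vt[m:].T @ rng.standard_normal((n - m, m))
G = A_pinv + N

assert np.allclose(A @ G, np.eye(m))     # G is still a generalized inverse
print(np.linalg.norm(A_pinv, "fro"))     # the smallest achievable value
print(np.linalg.norm(G, "fro"))          # strictly larger for N != 0
```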

Cited by 4 publications (4 citation statements), published in 2021 (2) and 2023 (2). References 27 publications.

“…This allows us to solve the class of problems in (2) that involve structured/group sparsity, namely those involving constraints on (projections onto) the mixed ∞,1 norm. The proximal mapping of the mixed 1,∞ norm is also applicable to the computation of minimax sparse pseudoinverses to underdetermined systems of linear equations [10], [11].…”
Section: Motivation (mentioning)
confidence: 99%
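For context on why that proximal mapping is tractable: the mixed ℓ1,∞ norm is a sum of per-group ℓ∞ norms, so its prox splits group by group, and the prox of λ‖·‖∞ follows from Moreau decomposition as v − Π_{λB₁}(v), where Π_{λB₁} is Euclidean projection onto the ℓ₁ ball of radius λ. A hedged NumPy sketch using the standard sort-based ℓ₁-ball projection (function names are mine, not from [10] or [11]):

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1 ball of the given radius
    (standard sort-and-threshold algorithm)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    cssv = np.cumsum(u) - radius
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u > cssv / k)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Prox of lam * ||.||_inf, via Moreau decomposition."""
    return v - project_l1_ball(v, lam)

def prox_mixed_l1_inf(V, lam):
    """Prox of lam * sum_i ||row_i||_inf: separable across rows (groups)."""
    return np.vstack([prox_linf(row, lam) for row in V])

V = np.random.default_rng(1).standard_normal((3, 5))
print(prox_mixed_l1_inf(V, 0.5))
```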
“…Thus, the matrix inversion problem can be expressed as finding the best approximate solution W* to minimise $C(\mathbf{W}) = \|\mathbf{I} - \mathbf{W}\mathbf{X}\|_F^2$, where $\|\cdot\|_F$ denotes the Frobenius norm. It is known that X+ is the unique best approximate solution to the above equation, that is, W* = X+ [34]. Obviously, with this transformation, we can relate the generalised inverse computation to FNNs.…”
Section: Methods (mentioning)
confidence: 99%
“…where $\|\cdot\|_F$ denotes the Frobenius norm. It is known that X+ is the unique best approximate solution to the above equation, that is, W* = X+ [34]. Obviously, with this transformation, we can relate the generalised inverse computation to FNNs.…”
Section: Constructing the Single Layer With Linear Neurons For Matrix... (mentioning)
confidence: 99%
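As a sanity check on the FNN view in these two statements: training a single linear layer W by gradient descent on C(W) = ‖I − WX‖_F², starting from W = 0, drives W to the pseudoinverse X⁺. A minimal NumPy sketch (step size and iteration count are illustrative choices, not values from [34]):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 5
X = rng.standard_normal((m, n))          # matrix whose generalized inverse we want

# Single linear layer W trained to minimise C(W) = ||I - W X||_F^2.
# Gradient: dC/dW = -2 (I - W X) X^T; starting from zero keeps the iterates
# in the row space of X, so they converge to the minimum-norm solution X^+.
W = np.zeros((n, m))
lr = 0.9 / np.linalg.norm(X, 2) ** 2     # stable step: below 1 / sigma_max(X)^2
for _ in range(20000):
    W += lr * 2.0 * (np.eye(n) - W @ X) @ X.T

print(np.max(np.abs(W - np.linalg.pinv(X))))   # ~0: gradient descent recovers X^+
```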
“…It is rare in practice. However, some sources, e.g., [21], [22], [23], [24], discuss this type from a mathematical point of view, because the two types of approximation (least-squares and minimum-norm) are applicable here, but with constraints.…”
Section: Connection With the Electrical Calculation (mentioning)
confidence: 99%