2015
DOI: 10.1145/2733693.2733713

A Relaxed Algorithm for Online Matrix Inversion

Cited by 5 publications (5 citation statements)
References 6 publications

“…We also show how to reproduce the result on "online linear systems" by Storjohann and Yang [SY15] via Theorem 1.1 and existing dynamic matrix inverse data structures. This again shows that many different results can be reproduced by reducing to dynamic matrix inverse, reducing the overall amount of proofs required in the area.…”
Section: Reproducing and Simplifying Further Results
Citation type: mentioning (confidence: 85%)
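The "dynamic matrix inverse" data structures referred to in this statement maintain the inverse of a matrix under updates. As a rough illustration only (a minimal sketch of my own, not the data structures used in the citing paper and far slower than them), the following keeps an explicit inverse in sync with single-entry updates in O(n^2) time per update via the Sherman-Morrison formula:

```python
# Minimal sketch (illustrative assumption, not the cited data structures):
# maintain an explicit inverse under single-entry updates in O(n^2) per
# update using the Sherman-Morrison formula.
import numpy as np

class DynamicInverse:
    def __init__(self, A):
        self.inv = np.linalg.inv(np.asarray(A, dtype=float))

    def update_entry(self, i, j, delta):
        """Add `delta` to A[i, j], keeping the stored inverse consistent."""
        denom = 1.0 + delta * self.inv[j, i]
        if abs(denom) < 1e-12:
            raise ValueError("update would make the matrix (numerically) singular")
        # Sherman-Morrison: (A + delta * e_i e_j^T)^{-1}
        #   = A^{-1} - delta * A^{-1} e_i e_j^T A^{-1} / denom
        self.inv -= delta * np.outer(self.inv[:, i], self.inv[j, :]) / denom

    def query(self):
        return self.inv

# Usage: keep the inverse in sync with an entry update.
A = 3.0 * np.eye(4)
D = DynamicInverse(A)
A[1, 2] += 0.5
D.update_entry(1, 2, 0.5)
assert np.allclose(D.query(), np.linalg.inv(A))
```

The structures referenced in the quote are asymptotically faster and keep the inverse in implicit form; the sketch only shows the interface being reduced to (update an entry, query the inverse).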
“…Storjohann and Yang [SY15] have given an O(n^ω) time algorithm for this problem. Here we show that the same result can be obtained by Corollary 4.2.…”
Section: QR Decomposition
Citation type: mentioning (confidence: 99%)
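For context, the "online" setting asks for the inverse (or solutions of linear systems) to be produced incrementally as the matrix is revealed. As a naive baseline only, under assumptions of my own (the matrix is revealed one leading principal submatrix at a time and every such submatrix is invertible; the precise online model of [SY15] may differ, and this is not their O(n^ω) algorithm), the inverse can be grown with a block/Schur-complement step costing O(n^2) per step, O(n^3) overall:

```python
# Minimal sketch (my own naive baseline, not the [SY15] algorithm):
# grow the inverse of the leading principal submatrix one row/column
# at a time via the 2x2 block (Schur-complement) inversion formula.
import numpy as np

def online_inverse(M):
    """Yield inv(M[:k, :k]) for k = 1..n, assuming each leading principal
    submatrix is nonsingular (an assumption of this sketch)."""
    n = M.shape[0]
    inv = np.array([[1.0 / M[0, 0]]])        # inverse of the 1x1 leading block
    yield inv
    for k in range(1, n):
        b = M[:k, k:k + 1]                    # new column, top part (k x 1)
        c = M[k:k + 1, :k]                    # new row, left part  (1 x k)
        d = M[k, k]
        s = d - (c @ inv @ b)[0, 0]           # Schur complement (a scalar)
        ib = inv @ b
        ci = c @ inv
        inv = np.block([
            [inv + ib @ ci / s, -ib / s],
            [-ci / s,           np.array([[1.0 / s]])],
        ])
        yield inv

# Usage: compare each step against a direct inverse.
M = np.random.rand(5, 5) + 5.0 * np.eye(5)   # diagonally dominant, so all leading blocks are invertible
for k, inv_k in enumerate(online_inverse(M), start=1):
    assert np.allclose(inv_k, np.linalg.inv(M[:k, :k]))
```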
“…The decrease of the exponent of MM implies theoretical acceleration of the solution of a number of important problems in various areas of computations in Algebra and Computer Science, such as Boolean MM, computation of paths and distances in graphs, parsing context-free grammars, the solution of a nonsingular linear system of equations, computations of the inverse, determinant, characteristic and minimal polynomials, and various factorizations of a matrix. See [142], [34], [1, Sections 6.3-6.6], [24, pages 49-51], [18, Chapter 2], [3], [79], [48], [98], [160], [157], [158], [159], [86], [25], [89], [6], [54], [156], [132], [96], [4], [138], [140], [103], [104], [105], [125], and the bibliography therein and notice that some new important applications have been found very recently, e.g., in 4 papers at ISSAC 2016.…”
Section: Summary of the Study of the MM Exponents After 1978
Citation type: mentioning (confidence: 99%)
“…Already Strassen in 1969 extended MM to matrix inversion (we sketch this in Examples 11.1 and 11.2 below), then Bunch and Hopcroft [34] extended MM to invertible LUP factorization, and further extensions followed in the eighties, for instance, in [82] to the computation of the matrix rank and of Gaussian elimination. Efficient algorithms whose complexity is sensitive to the rank or to the rank profile have only very recently been discovered [83,64,140]. One of the latest reductions to MM was that of the characteristic polynomial, which has extended the seminal work of [90] more than thirty years afterwards [126].…”
Section: Acceleration of Computations via Reductions to MM
Citation type: mentioning (confidence: 99%)
“…We first propose quadratic, space and verification time, non-interactive practical certificates for the row or column rank profile and for the rank profile matrix that are rank-sensitive. Previously known certificates either had additional logarithmic factors to the quadratic complexities [14] or are not practical [16].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
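The "quadratic verification time" certificates discussed in this statement belong to a long line of randomized linear-algebra verification protocols. As a classical, much simpler example of the flavor (a sketch of Freivalds' product check, not the rank-profile certificates of [14] or [16]), a claimed matrix product can be verified in O(n^2) time per randomized trial:

```python
# Minimal sketch (illustrative only, not the cited certificates):
# Freivalds' check verifies a claimed product A @ B == C in O(n^2)
# per trial, with one-sided error probability at most 1/2 per trial.
import numpy as np

def freivalds_check(A, B, C, trials=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = C.shape[1]
    for _ in range(trials):
        x = rng.integers(0, 2, size=(n, 1))       # random 0/1 vector
        if not np.array_equal(A @ (B @ x), C @ x):
            return False                          # certificate rejected
    return True                                   # accepted with high probability

# Usage: accept an honest product, reject a corrupted one.
A = np.random.randint(0, 10, (50, 50))
B = np.random.randint(0, 10, (50, 50))
C = A @ B
assert freivalds_check(A, B, C)
C[0, 0] += 1
assert not freivalds_check(A, B, C)
```

Each trial has one-sided error probability at most 1/2, so a handful of independent trials already gives a very small failure probability.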