2016
DOI: 10.1016/j.jcp.2016.09.001
Estimating the trace of the matrix inverse by interpolating from the diagonal of an approximate inverse

Abstract: A number of applications require the computation of the trace of a matrix that is implicitly available through a function. A common example of a function is the inverse of a large, sparse matrix, which is the focus of this paper. When the evaluation of the function is expensive, the task is computationally challenging because the standard approach is based on a Monte Carlo method which converges slowly. We present a different approach that exploits the pattern correlation, if present, between the diagonal of t…


Cited by 34 publications (30 citation statements)
References 39 publications
“…Hence, to efficiently compute the quantity m , it suffices to insert a few lines related to (12) and (13) into the existing Lanczos iteration (1). Then, with m , the incremental error is computed in a straightforward manner by using (11). We now put back the index k and summarize this computation in Algorithm 1.…”
Section: Iterative Algorithm for Computing the Incremental Error
confidence: 99%
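The "existing Lanczos iteration" that the quoted passage augments can be sketched as follows. This is a generic NumPy sketch of plain Lanczos tridiagonalization; it does not reproduce the cited paper's equations (11)-(13), whose extra updates would be inserted inside the loop.

```python
import numpy as np

def lanczos(A, v0, k):
    """Plain Lanczos tridiagonalization: after k steps the k-by-k
    tridiagonal matrix T (diagonal alpha, off-diagonal beta) represents
    A restricted to the Krylov subspace generated by v0.
    (The cited paper's per-step updates, eqs. (12)-(13), are not
    reproduced here; they would go inside this loop.)"""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:          # breakdown: invariant subspace found
            return alpha[: j + 1], beta[:j]
        Q[:, j + 1] = w / beta[j]
    return alpha, beta[: k - 1]
```

Running k = n steps on a small symmetric matrix reproduces its full spectrum through the eigenvalues of T.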
“…The trace of a function of a matrix, tr(f(A)), occurs in diverse areas including scientific computing, statistics, and machine learning [1-12]. Often in applications, the matrix A is so large that explicitly forming f(A) is not a practically viable option. In this work, we focus on the case when A ∈ R^{n×n} is symmetric positive-definite, so that it admits a spectral decomposition Q^T A Q = diag(λ_1, …, λ_n), where the λ_i are real positive eigenvalues and Q is the matrix of normalized eigenvectors.…”
Section: Introduction
confidence: 99%
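Under the spectral decomposition quoted above, tr(f(A)) = Σ_i f(λ_i), since f(A) = Q diag(f(λ_1), …, f(λ_n)) Q^T and the trace is invariant under the orthogonal similarity. A minimal NumPy sketch; the dense eigendecomposition used here is exactly what becomes infeasible for the large A discussed in the passage:

```python
import numpy as np

def trace_of_function(A, f):
    # For symmetric A with Q^T A Q = diag(l_1, ..., l_n),
    # tr(f(A)) = sum_i f(l_i). Dense eigendecomposition: O(n^3),
    # viable only for small A.
    eigvals = np.linalg.eigvalsh(A)
    return np.sum(f(eigvals))
```

With f(x) = 1/x this recovers tr(A^{-1}), the quantity the surveyed paper targets.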
“…There are three main factors contributing to the computational complexity: (i) computation of the K_max smallest eigenvectors of L_N, (ii) K-means clustering, and (iii) computation of the community detection loss function and the model mismatch metric. The overall computational complexity of each method is summarized in Table I. For (i), computing the K_max smallest eigenvectors of L_N requires O(K_max(m + n)) operations using power iteration techniques [35], [36], [37], [38], [39], where m + n is the number of nonzero entries in L_N. For (ii), given any K ≤ K_max, K-means clustering on the rows of the K smallest eigenvectors of L_N requires O(nK^2) operations [40].…”
Section: Computational Complexity Analysis
confidence: 99%
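The power-iteration route to the smallest eigenvectors mentioned in the passage can be illustrated on the spectrally flipped matrix σI − L (with σ ≥ λ_max(L)): its dominant eigenvector is the smallest eigenvector of L, and each step costs one matrix-vector product, i.e. O(m + n) for a sparse Laplacian. This is an illustrative sketch under those assumptions, not the refined variants of the cited works [35]-[39]:

```python
import numpy as np

def smallest_eigvec(L, sigma, iters=500):
    """Smallest eigenvector of a PSD matrix L via power iteration on
    sigma*I - L, assuming sigma >= lambda_max(L) so the flipped matrix
    is PSD and its dominant eigenvector pairs with L's smallest
    eigenvalue. Each step is one mat-vec: O(m + n) for sparse L."""
    n = L.shape[0]
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n)
    for _ in range(iters):
        x = sigma * x - L @ x        # apply (sigma*I - L)
        x /= np.linalg.norm(x)       # renormalize to avoid over/underflow
    return x
```

For a connected graph Laplacian the smallest eigenvalue is 0 with the constant vector as eigenvector, which gives a simple sanity check.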
“…The variance of this estimator grows with N^2 and can be very large even when the diagonal elements of A^{-1} have a small variance. Sophisticated versions of this approach can be found in [1, 2].…”
Section: Introduction
confidence: 99%
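The estimator whose variance the passage discusses is the standard Monte Carlo (Hutchinson-type) trace estimator: for random vectors z with i.i.d. ±1 entries, E[zᵀAz] = tr(A), and the slow O(1/√s) convergence in the number of samples s is what motivates the paper's alternative. A minimal sketch; the function name and matrix-free interface are illustrative:

```python
import numpy as np

def hutchinson_trace(apply_A, n, num_samples, rng):
    """Monte Carlo trace estimate: average z^T (A z) over random
    Rademacher vectors z. Only products with A are needed, so A may be
    available implicitly (e.g. A = inverse of a sparse matrix, applied
    via a linear solve). Error decays like O(1/sqrt(num_samples))."""
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ apply_A(z)
    return total / num_samples
```

Note that for a diagonal A the estimator is exact for any sample count (z_i^2 = 1 kills the variance), which mirrors the passage's point that the variance comes from the off-diagonal mass, not the diagonal.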