2011
DOI: 10.1137/09077432x
A Fast Parallel Algorithm for Selected Inversion of Structured Sparse Matrices with Application to 2D Electronic Structure Calculations

Abstract: An efficient parallel algorithm is presented and tested for computing selected components of H^{-1}, where H has the structure of a Hamiltonian matrix of two-dimensional lattice models with local interaction. Calculations of this type are useful for several applications, including electronic structure analysis of materials in which the diagonal elements of the Green's functions are needed. The algorithm proposed here is a direct method based on an LDL^T factorization. The elimination tree is used to organize the …

Cited by 43 publications (22 citation statements)
References 38 publications
“…The future improvement includes treating C and H as sparse matrices so that the construction of the Hamiltonian matrix C^T H C and the mass matrix C^T C is of linear scaling. By treating C and H as sparse matrices, we can also incorporate the recently developed pole expansion and selected inversion type fast algorithms [25–30] to reduce the asymptotic scaling for solving the generalized eigenvalue problem (10) from cubic to at most quadratic for 3D bulk systems. We also remark that the current procedure for constructing the orbitals from adaptive local basis functions is still a costly procedure inside each element.…”
Section: Discussion
confidence: 99%
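The pole-expansion idea referenced in the statement above can be illustrated with a deliberately simple, slowly converging Matsubara pole sum: the density matrix f(H − µ) is written as a sum of shifted resolvents, and only selected elements of each resolvent are ultimately needed. The Hamiltonian, β, µ, and pole count below are illustrative stand-ins chosen for this sketch, not the optimized 40–80-pole contour expansions used by the cited algorithms:

```python
import numpy as np

# Hypothetical small "Hamiltonian" standing in for a 2D lattice H (illustrative values).
H = np.array([[0.0, -1.0, 0.0],
              [-1.0, 0.5, -1.0],
              [0.0, -1.0, 1.0]])
beta, mu = 1.0, 0.2   # inverse temperature and chemical potential (illustrative)
M = 4000              # Matsubara poles kept; contour-optimized expansions need far fewer

# Truncated Matsubara pole sum for the Fermi operator:
#   f(H - mu I) = (1/2) I - (2/beta) * sum_l (H - mu I) ((H - mu I)^2 + nu_l^2 I)^{-1}
n = H.shape[0]
A = H - mu * np.eye(n)
P = 0.5 * np.eye(n)
for l in range(1, M + 1):
    nu = (2 * l - 1) * np.pi / beta
    # In a selected-inversion code only the needed entries of this resolvent
    # would be computed from a sparse factorization; here we invert densely.
    P -= (2.0 / beta) * A @ np.linalg.inv(A @ A + nu**2 * np.eye(n))

# Reference: Fermi-Dirac function applied through the eigendecomposition of H.
w, V = np.linalg.eigh(H)
P_ref = V @ np.diag(1.0 / (1.0 + np.exp(beta * (w - mu)))) @ V.T
assert np.allclose(P, P_ref, atol=1e-3)
```

The point of the decomposition is that each pole contributes only a shifted-matrix inverse, which is exactly the operation selected inversion accelerates for sparse H.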
“…However, notice that one only needs the knowledge of the diagonal blocks of the Gram matrix G to construct the electron density. This allows us to use the recently developed pole expansion and selected inversion type fast algorithms [25–30] to reduce the asymptotic scaling for solving the generalized eigenvalue problem (10) from cubic to at most quadratic for 3D bulk systems. For simplicity we employ a cubic scaling implementation within the current work, as described in more detail in Section IV.…”
Section: Element Orbitals
confidence: 99%
“…Both the factorization and inversion steps scale as N^{(d+1)/2}, where d is 3 for bulk systems, 2 for slabs and surfaces, and 1 for long and thin systems. The number of iterations for the chemical potential varies significantly depending on the initial guess for the chemical potential, the band gap, and the required accuracy. The number of poles (typically 40–80) in PEXSI is independent of the system size, but depends on the given electronic temperature and spectral width (the difference between the max/min eigenvalues) of the Kohn–Sham Hamiltonian.…”
Section: Comparisons With Other Methods
confidence: 99%
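As a quick sanity check on the N^{(d+1)/2} cost model quoted above, this back-of-the-envelope sketch (not part of any cited code) shows how the factorization cost grows when the system size N is doubled in each dimensionality regime:

```python
# Cost model: factorization/inversion cost ~ N^((d+1)/2),
# so doubling N multiplies the cost by 2^((d+1)/2).
for d, name in [(1, "quasi-1D"), (2, "slab/surface"), (3, "bulk")]:
    exponent = (d + 1) / 2
    growth = 2 ** exponent
    print(f"{name}: cost ~ N^{exponent:.1f}, doubling N multiplies cost by {growth:.2f}")
```

For bulk (d = 3) systems the exponent is 2, i.e. the quadratic scaling mentioned in the earlier citation statements; for quasi-1D systems the cost is linear in N.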
“…The number of iterations for the chemical potential varies significantly depending on the initial guess for the chemical potential, the band gap, and the required accuracy. The number of poles (typically 40–80) in PEXSI is independent of the system size, but depends on the given electronic temperature and spectral width (the difference between the max/min eigenvalues) of the Kohn–Sham Hamiltonian. On the other hand, the computational cost for SIPs is n_s × t_S, where n_s is the total number of shifts and t_S is the combined cost of one numerical factorization and the solution/orthogonalization step per shift.…”
Section: Comparisons With Other Methods
confidence: 99%
“…To obtain these selected elements, we need to compute the corresponding elements of (H − (z_l + µ)S)^{-1} for all z_l. The recently developed selected inversion method [17–19] provides an efficient way of computing the selected elements of an inverse matrix. For a symmetric matrix of the form A = H − zS, the selected inversion algorithm first constructs an LDL^T factorization of A, where L is a block lower triangular matrix (the Cholesky factor) and D is a block diagonal matrix.…”
Section: A Basis Expansion by Nonorthogonal Basis Functions
confidence: 99%
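The LDL^T-based recursion behind selected inversion can be sketched on a small dense matrix. This toy version fills the entire inverse column by column from the last index, using only L, D, and already-computed trailing entries; the actual algorithm restricts the same recurrences to the sparsity pattern of L, organized by the elimination tree, which is where the savings come from:

```python
import numpy as np

def ldl_factor(A):
    """Unit lower triangular L and diagonal d with A = L diag(d) L^T
    (no pivoting; assumes A is symmetric positive definite)."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

def selected_inversion_dense(L, d):
    """Dense analogue of selected inversion: build Z = A^{-1} backwards,
    each column needing only L, d, and the trailing block of Z."""
    n = len(d)
    Z = np.zeros((n, n))
    for k in range(n - 1, -1, -1):
        Z[k + 1:, k] = -Z[k + 1:, k + 1:] @ L[k + 1:, k]
        Z[k, k + 1:] = Z[k + 1:, k]                      # symmetry of A^{-1}
        Z[k, k] = 1.0 / d[k] - L[k + 1:, k] @ Z[k + 1:, k]
    return Z

# Small SPD test matrix standing in for a shifted matrix H - zS (illustrative).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
L, d = ldl_factor(A)
Z = selected_inversion_dense(L, d)
assert np.allclose(Z, np.linalg.inv(A))
```

In the sparse setting only the entries of Z within the pattern of L (in particular the diagonal, i.e. the Green's function elements) are computed, which is what yields the N^{(d+1)/2} cost instead of the cubic cost of a full inverse.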