2012
DOI: 10.1088/1751-8113/45/6/065204
Linearly scaling direct method for accurately inverting sparse banded matrices

Abstract: In many problems in Computational Physics and Chemistry, one finds a special kind of sparse matrices, called banded matrices. These matrices, defined as having non-zero entries only within a given distance from the main diagonal, often need to be inverted in order to solve the associated linear system of equations. In this work, we introduce a new O(n) algorithm for solving such a system, the size of the matrix being n × n. We derive analytical recursive expressions that allow us to di…
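The abstract's recursive expressions are truncated here, but the class of technique it describes can be illustrated with the classic Thomas algorithm, the O(n) direct solve for the tridiagonal (semi-bandwidth 1) special case of a banded system A x = d. This is a minimal sketch of an O(n) banded solve, not the paper's algorithm:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d in O(n), where A has
    sub-diagonal a (a[0] unused), diagonal b, and super-diagonal c
    (c[-1] unused). Forward sweep eliminates the sub-diagonal,
    back substitution recovers x."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # pivot after elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For general semi-bandwidth m the analogous banded elimination costs O(n m²), which reduces to O(n) when m is a fixed constant.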

Cited by 6 publications (11 citation statements)
References 52 publications
“…To show this, first, we will prove that the value of the vectors p and q can be obtained in O(N_c) operations. Then, we will show that the same is true for all the non-zero entries of matrix R, and finally we will briefly discuss the results in [18], where we introduced an algorithm to solve the system in (2.6) also in O(N_c) operations.…”
Section: Calculation of the Lagrange Multipliers
Confidence: 86%
“…When the semi-band width m is not constant along the whole matrix, things are more complicated and the cost is always between O(N_c m_min²) and O(N_c m_max²), depending on how the different rows are arranged. In general, we want to minimize the number of zero fillings in the process of Gaussian elimination (see [18] for further details), which is achieved by not having zeros below non-zero entries. This is easier to understand with an example: Consider the following matrices, where Ω and ω represent different non-zero values for every entry (i.e., not all ω, nor all Ω, must take the same value; different symbols have been chosen only to highlight the main diagonal):…”
Section: Ordering of the Constraints
Confidence: 99%
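The fill-in argument in the excerpt above can be made concrete with a small hypothetical experiment (the matrices below are illustrative, not taken from the cited paper): count how many zero entries turn non-zero during plain Gaussian elimination for an "arrow" matrix with its dense row and column placed first versus last. The second ordering never leaves a zero below a non-zero entry, so it produces no fill-in:

```python
def count_fill_in(A):
    """Run Gaussian elimination (no pivoting) on a dense copy of A
    and count 'fill-in': zero entries that become non-zero."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    fill = 0
    for k in range(n):
        for i in range(k + 1, n):
            if A[i][k] != 0.0:
                f = A[i][k] / A[k][k]
                for j in range(k, n):
                    new = A[i][j] - f * A[k][j]
                    if A[i][j] == 0.0 and new != 0.0:
                        fill += 1
                    A[i][j] = new
    return fill

# Dense row/column first: eliminating it pollutes every later row.
arrow_first = [[4.0, 1, 1, 1],
               [1, 4, 0, 0],
               [1, 0, 4, 0],
               [1, 0, 0, 4]]

# Dense row/column last: no zeros sit below non-zero entries.
arrow_last = [[4.0, 0, 0, 1],
              [0, 4, 0, 1],
              [0, 0, 4, 1],
              [1, 1, 1, 4]]
```

Here `count_fill_in(arrow_first)` reports 6 fill-ins while `count_fill_in(arrow_last)` reports none, which is the effect the row arrangement in the excerpt is designed to exploit.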
“…where L is a sparse matrix (taking advantage of the sparsity of a system of equations can greatly reduce the numerical complexity of its solution [57]), y is known (y = −ρ/ε_0, in this case) and x is the quantity we are solving for, in this case the electrostatic potential. Equation (Eq.…”
Section: Conjugate Gradients
Confidence: 99%
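The excerpt describes solving a sparse system L x = y by conjugate gradients. A minimal, self-contained sketch (using a 1D discrete Poisson operator as a stand-in for the excerpt's electrostatic L, an assumption for illustration) shows the key point: the matrix–vector product exploits sparsity, so each CG iteration costs O(n) rather than O(n²):

```python
def poisson_matvec(x):
    """Apply the 1D discrete Poisson operator (tridiagonal stencil
    [-1, 2, -1] with Dirichlet boundaries) in O(n)."""
    n = len(x)
    out = [0.0] * n
    for i in range(n):
        out[i] = 2.0 * x[i]
        if i > 0:
            out[i] -= x[i - 1]
        if i < n - 1:
            out[i] -= x[i + 1]
    return out

def cg(matvec, y, tol=1e-12, max_iter=1000):
    """Conjugate gradients for L x = y, with L symmetric positive
    definite and supplied only through its matvec."""
    n = len(y)
    x = [0.0] * n
    r = y[:]              # residual y - L x, with x = 0 initially
    p = r[:]              # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

For an n × n SPD system, CG converges in at most n iterations in exact arithmetic, and typically far fewer with a good preconditioner; this is why sparsity-aware solvers of this kind are preferred over dense inversion for large electrostatics problems.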