Approximating the inverse of a matrix for use in iterative algorithms on vector processors
1979 | DOI: 10.1007/bf02243566

Cited by 158 publications (65 citation statements) | References 3 publications
“…Among the first techniques of this type we mention polynomial preconditioners, which are based on approximating the inverse of the coefficient matrix A with a low-degree polynomial in the matrix. These methods have a long history (see, e.g., [20,24]), but came into vogue only after the first vector processors had become available [38,58]. Polynomial preconditioners only require matrix-vector products with A and therefore have excellent potential for parallelization, but they are not as effective as incomplete factorization methods at reducing the number of iterations.…”
Section: Introduction
confidence: 99%
“…For example, on the CDC STAR algorithms are sought that yield vectors whose elements can be stored contiguously and whose lengths are on the order of hundreds and preferably thousands. Determination of algorithms satisfying these conditions has yielded programs that perform two to four times faster than their CDC 7600 counterparts [7,8,9]. Straightforward implementation of "scalar algorithms" on the STAR results in performance that is substantially less than that of the CDC 7600.…”
Section: C
confidence: 99%
“…And the estimate from the previous iteration is used in the update. Equation (22) is also called the Guttman transform by De Leeuw and Heiser [14]. Although V† could be calculated separately from the SMACOF algorithm, since V is static during the iterations, the time complexity of full-rank matrix inversion is always O(N³).…”
Section: Weighted DA-SMACOF
confidence: 99%
“…Since the matrix in (23) is an SPD matrix, we could solve (23) instead of (22) without doing the pseudo-inverse of V. To address this issue, a well-known iterative approximation method for solving equations of the Ax = b form, the so-called Conjugate Gradient (CG) method [25], could be used here.…”
Section: Weighted DA-SMACOF
confidence: 99%