1986
DOI: 10.1016/0096-3003(86)90126-8
Linear algebra on high performance computers

Cited by 32 publications (19 citation statements)
References 22 publications
“…HPL uses the right-looking blocked LU decomposition algorithm [15]. It adopts the block-cyclic data distribution scheme [16].…”
Section: HPL Algorithm
confidence: 99%
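The right-looking blocked factorization the citing authors refer to can be sketched compactly. Below is a minimal NumPy sketch, assuming no pivoting and omitting the block-cyclic process distribution that HPL adds on top; the function name and block size are illustrative, not from HPL itself.

```python
import numpy as np

def blocked_lu(A, nb=64):
    """Right-looking blocked LU without pivoting, in place.

    A must be a square float array (and, without pivoting, safely
    factorizable, e.g. diagonally dominant).  Afterwards the strict lower
    triangle holds L (unit diagonal implied) and the upper triangle holds U.
    Sketch only: HPL adds partial pivoting and a 2-D block-cyclic layout.
    """
    n = A.shape[0]
    for k in range(0, n, nb):
        b = min(nb, n - k)
        # Unblocked LU of the diagonal block A11.
        for j in range(k, k + b):
            A[j+1:k+b, j] /= A[j, j]
            A[j+1:k+b, j+1:k+b] -= np.outer(A[j+1:k+b, j], A[j, j+1:k+b])
        if k + b < n:
            # L21 = A21 * U11^{-1}  (triangular solve from the right).
            U11 = np.triu(A[k:k+b, k:k+b])
            A[k+b:, k:k+b] = np.linalg.solve(U11.T, A[k+b:, k:k+b].T).T
            # U12 = L11^{-1} * A12  (unit lower triangular solve).
            L11 = np.tril(A[k:k+b, k:k+b], -1) + np.eye(b)
            A[k:k+b, k+b:] = np.linalg.solve(L11, A[k:k+b, k+b:])
            # Right-looking trailing update: the rank-b matrix-matrix product.
            A[k+b:, k+b:] -= A[k+b:, k:k+b] @ A[k:k+b, k+b:]
```

The trailing-submatrix update in the last line dominates the flop count, which is why the right-looking variant maps so well onto matrix-matrix (BLAS-3) kernels.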
“…Several algorithms which improve the data locality for dense linear algebra problems have been suggested for shared memory systems (e.g., see [2], [4]). These algorithms are based on BLAS3 (Basic Linear Algebra Subprograms, Level 3) modules consisting of matrix-matrix operations.…”
Section: Introduction
confidence: 99%
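The data-locality argument behind BLAS-3-based algorithms can be illustrated with a tiled matrix multiply: each tile is brought into fast memory once and reused across a whole block of updates, so arithmetic grows faster than memory traffic as the tile size increases. A minimal sketch follows; the routine name and tile size are illustrative, not from the cited works.

```python
import numpy as np

def tiled_matmul(A, B, nb=128):
    """Tiled (blocked) matrix multiply C = A @ B.

    Each nb-by-nb tile of A and B is reused for nb rank-1-like updates while
    it is resident in cache, which is the locality benefit that BLAS-3
    (matrix-matrix) kernels exploit.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=np.result_type(A, B))
    for i in range(0, m, nb):
        for j in range(0, n, nb):
            for p in range(0, k, nb):
                # Tile update C_ij += A_ip @ B_pj  (a small GEMM).
                C[i:i+nb, j:j+nb] += A[i:i+nb, p:p+nb] @ B[p:p+nb, j:j+nb]
    return C
```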
“…experience on a variety of computers led Dongarra and Sorensen [7] to conclude that nearly optimal performance of numerical linear algebra code can be achieved if the subroutines for simple matrix and vector operations, such as the BLAS [12], are written to perform well on the computer at hand. We have written these subroutines so that they run efficiently on an IBM 3090 VF vector computer when the compiler VS FORTRAN 2.3.0 with parameter vlev = 2 is used.…”
Section: Much Computational
confidence: 99%
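The point of the quote, that overall performance hinges on a few machine-tuned kernels such as the BLAS, can be seen on any modern system by comparing an interpreted triple loop against a BLAS-backed product. The snippet below is only an illustration of that gap, not a reconstruction of the IBM 3090 VF / VS FORTRAN experiment.

```python
import time
import numpy as np

def naive_matmul(A, B):
    """Reference triple-loop multiply with no blocking and no tuned kernel."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            s = 0.0
            for p in range(k):
                s += A[i, p] * B[p, j]
            C[i, j] = s
    return C

n = 200
rng = np.random.default_rng(0)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

t0 = time.perf_counter(); C_slow = naive_matmul(A, B); t_slow = time.perf_counter() - t0
t0 = time.perf_counter(); C_fast = A @ B;               t_fast = time.perf_counter() - t0

assert np.allclose(C_slow, C_fast)
# A @ B dispatches to whatever optimized BLAS this NumPy build links against.
print(f"naive loop: {t_slow:.3f} s   BLAS-backed product: {t_fast:.5f} s")
```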