1997
DOI: 10.1137/1.9781611971446

Applied Numerical Linear Algebra

Abstract: Applied Numerical Linear Algebra, by James W. Demmel. Topics include linear equation solving and estimating condition numbers: to compute a practical error bound based on a bound like (2.5), we need to estimate ‖A −1 ‖.

Cited by 1,990 publications (1,669 citation statements). References 0 publications.
“…It utilizes this subspace to approximate the eigenvectors and the corresponding eigenvalues of A. In order to find all the eigenvalues, A is generally reduced to an n × n tridiagonal matrix T [5], because there are efficient algorithms available for finding the eigenvalues of T. The main intuition behind the Lanczos method is that it generates a partial tridiagonal matrix T k (see Algorithm 1) from A, where the extremal eigenvalues of T k are the optimal approximations of the extremal eigenvalues of A [8] from the subspace K k .…”
Section: The Lanczos Method
confidence: 99%
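The three-term recurrence this statement describes can be sketched in a few lines of numpy. This is a minimal illustration, not the cited paper's implementation: it builds the partial tridiagonal T k from a symmetric A and a random start vector (no reorthogonalization), and the function and variable names are my own.

```python
import numpy as np

def lanczos(A, k, rng=np.random.default_rng(0)):
    """Build a k x k symmetric tridiagonal T_k whose extremal
    eigenvalues (Ritz values) approximate those of symmetric A."""
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)          # unit start vector for K_k
    q_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(k):
        w = A @ q - beta * q_prev   # three-term recurrence
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta == 0.0:             # invariant subspace found
            break
        q_prev, q = q, w / beta
    m = len(alphas)
    T = (np.diag(alphas)
         + np.diag(betas[:m - 1], 1)
         + np.diag(betas[:m - 1], -1))
    return T

# Example: the largest Ritz value of T_k approaches lambda_max(A).
A = np.diag(np.arange(1.0, 101.0))  # eigenvalues 1..100
T = lanczos(A, 30)
ritz = np.linalg.eigvalsh(T)        # ascending Ritz values
```

Even with k = 30 steps on a 100 × 100 matrix, the largest Ritz value is already very close to the largest eigenvalue of A, which is the behavior the quoted passage appeals to.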
“…In the particular case of the symmetric extremal eigenvalue problem, we are only interested in finding either λ max , λ min , or both. There are two main families of methods for solving the extremal eigenvalue problem: direct methods and iterative methods [8]. Direct methods compute all the eigenvalues at once; however, they incur a computation cost of Θ(n 3 ) and are more applicable when all the eigenvalues and the corresponding eigenvectors are required.…”
Section: Symmetric Extremal Eigenvalue Problem
confidence: 99%
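The direct approach mentioned above can be illustrated with numpy's dense symmetric eigensolver: it returns the full spectrum at Θ(n³) cost even when only the two extremal eigenvalues are wanted. The matrix here is an arbitrary symmetric test matrix of my own choosing.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2  # symmetrize to get a symmetric test matrix

# Direct method: a full dense eigendecomposition, Theta(n^3) work,
# even though only lam_min and lam_max are of interest.
eigvals = np.linalg.eigvalsh(A)      # all 200 eigenvalues, ascending
lam_min, lam_max = eigvals[0], eigvals[-1]
```

An iterative method such as Lanczos would instead approximate only λ max and λ min from a small Krylov subspace, which is why it is preferred for the extremal problem on large matrices.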
“…While different implementations may exhibit different numerical behavior, mathematically 1 the kth approximate solution x k produced by CG is optimal in the sense that the kth approximation error A −1 b − x k satisfies [11, Theorem 6.1] (1.1). Here the superscript "*" denotes conjugate transpose. In practice, x k is computed recursively from x k−1 via short-term recurrences [4,7,10,19]. But exactly how it is computed, though extremely crucial in practice, is not important to our analysis in this paper.…”
Section: Introduction
confidence: 99%
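The short-term recurrences referred to above can be sketched as a plain conjugate gradient loop: each x k is obtained from x k−1 using only the current residual and search direction. This is a textbook CG sketch for a symmetric positive definite system, not the cited paper's code.

```python
import numpy as np

def cg(A, b, iters):
    """Plain conjugate gradient for symmetric positive definite A,
    using the standard short-term recurrences."""
    x = np.zeros_like(b)
    r = b - A @ x           # initial residual
    p = r.copy()            # initial search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p      # x_k from x_{k-1}: short-term recurrence
        r -= alpha * Ap     # updated residual
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD example; in exact arithmetic CG is exact after n = 2 steps.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b, 2)
```

No previous iterates beyond x k−1 (and the current r, p) are stored, which is exactly what makes the recurrences "short-term".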
“…In fact, we have the following well-known and frequently referenced error bound (see, e.g., [4,10,19,22]):…”
Section: Introduction
confidence: 99%
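The snippet cuts off before the bound itself. The classical CG convergence estimate that is standardly cited in this context (stated here for reference; whether it is the exact form in the cited paper is an assumption) is, in the A-norm with κ = λ max /λ min :

```latex
\|x_* - x_k\|_A \;\le\; 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{k}\|x_* - x_0\|_A,
\qquad \kappa = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)},
```

where x_* = A −1 b is the exact solution. It follows from the Chebyshev polynomial argument applied to the CG optimality property quoted above.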