1996
DOI: 10.1007/bf02141745
An adaptive Richardson iteration method for indefinite linear systems

Cited by 20 publications (11 citation statements)
References 26 publications
“…In the latter case, the set K_m is replaced by two intervals on the real axis, one on each side of the origin. A Richardson method for the solution of (1.1) when A is symmetric indefinite and s = 1 is presented in [5] and the approach there generalizes in an obvious manner to the case s > 1. We therefore omit the details.…”
Section: Algorithm 4.2 (Adaptive Richardson Method) (mentioning)
confidence: 99%
“…These well-known facts have been verified for the rectangular cavity [9]. The algorithms that successfully solved all problem sizes were the implicitly restarted Lanczos algorithm [10,21,22] and the Jacobi-Davidson algorithm [11,23,24]. Like Lanczos, these algorithms converge superlinearly but, like subspace iteration, their memory requirements are determined by the order n of the problem times a small multiple of the number p of the desired eigenvalues.…”
Section: Solving the Matrix Eigenvalue Problem (mentioning)
confidence: 98%
“…A_1 and M_1 have been defined in (21). It turned out that this preconditioner leads to an algorithm that is very similar to the one of the straightforward approach.…”
Section: Solving the Constrained System of Equations: The Augmented … (mentioning)
confidence: 99%
“…For example, when discretizing the two-point boundary value problems or the partial differential equations that frequently appear in oil-reservoir engineering, in weather forecasting, or in electronic device modelling among others, linear systems like (1) need to be solved (see, e.g., [1,16]). The well-known Richardson's method (also known as the Chebyshev method) and its variations are characterized by using the residual vector, r(x) = b − Ax, as the search direction to solve linear systems iteratively (see, e.g., [7,11,12,24,26]). In general, these variations of Richardson's method have not been considered competitive with Krylov subspace methods, which nowadays represent the best-known options for solving (1), especially when combined with suitable preconditioning strategies.…”
Section: Introduction (mentioning)
confidence: 99%
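The excerpt above describes the core of Richardson's method: using the residual r(x) = b − Ax as the search direction. A minimal sketch for a symmetric positive definite system follows; the function name, the fixed relaxation parameter, and the small example matrix are illustrative assumptions, not taken from the paper (the paper itself treats the symmetric indefinite case with adaptively chosen parameters).

```python
import numpy as np

def richardson(A, b, x0, omega, tol=1e-10, max_iter=1000):
    """Stationary Richardson iteration: x_{k+1} = x_k + omega * (b - A x_k)."""
    x = x0.copy()
    for k in range(max_iter):
        r = b - A @ x          # residual, used as the search direction
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + omega * r
    return x, max_iter

# Illustrative SPD example; omega = 2/(lmin + lmax) is the classical
# optimal fixed relaxation parameter for the SPD case.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
lams = np.linalg.eigvalsh(A)
omega = 2.0 / (lams[0] + lams[-1])
x, iters = richardson(A, b, np.zeros(2), omega)
```

For an indefinite A a single fixed omega no longer guarantees convergence, which is why the adaptive parameter choices discussed in the cited works are needed.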
“…The most important is the use of the spectral step-length choice, also known as the Barzilai-Borwein choice ([2,13,17,22]), which has proved to yield fast local convergence for the solution of nonlinear optimization problems ([4,5,6,15,23]). However, this special choice of step size cannot guarantee global convergence by itself, as usually happens with other variations (see, e.g., [7,11,12,24,26]). For that, we combine its use with a tolerant globalization strategy, which represents the second new feature.…”
Section: Introduction (mentioning)
confidence: 99%
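The Barzilai-Borwein (spectral) step length mentioned above can be sketched for a linear residual iteration. This is a minimal illustration of the BB1 choice alpha = (s·s)/(s·y) with s = x_{k+1} − x_k and y = r_k − r_{k+1}; the function name and the plain residual-norm stopping test are assumptions for illustration, and no globalization strategy (as the excerpt says is needed in general) is included.

```python
import numpy as np

def bb_richardson(A, b, x0, tol=1e-10, max_iter=1000):
    """Richardson-type iteration with the Barzilai-Borwein (BB1) step length."""
    x = x0.copy()
    r = b - A @ x
    alpha = 1.0                    # initial step; the BB step needs one previous iterate
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x, k
        x_new = x + alpha * r
        r_new = b - A @ x_new
        s = x_new - x
        y = r - r_new              # equals A @ s for a linear residual
        alpha = (s @ s) / (s @ y)  # BB1 (spectral) step length
        x, r = x_new, r_new
    return x, max_iter

# Illustrative SPD example (for SPD A the BB step stays safely positive).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = bb_richardson(A, b, np.zeros(2))
```

Note that each BB step is the reciprocal of a Rayleigh quotient of A, so it adapts to the local spectrum; the resulting residual norms are typically nonmonotone, which is why the cited work pairs it with a tolerant globalization strategy.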