1991
DOI: 10.1007/bf01385728
On sharp quadratic convergence bounds for the serial Jacobi methods

Abstract: Summary. Using a new technique we derive sharp quadratic convergence bounds for the serial symmetric and SVD Jacobi methods. For the symmetric Jacobi method we consider the cases of well and poorly separated eigenvalues. Our result implies the result proposed, but not correctly proved, by Van Kempen. It also extends the well-known result of Wilkinson to the case of multiple eigenvalues.

Cited by 44 publications (51 citation statements) | References 18 publications
“…The symmetric Jacobi algorithm with row-cyclic strategy is quadratically convergent. This is a well known fact, and the proof of the general case of multiple eigenvalues is given by Hari [15]. Using the off-norm Ω(·) = ‖(·) − diag(·)‖_F, the quadratic convergence is stated as follows: …”
Section: Review Of Asymptotic Convergence
confidence: 99%
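The quadratic convergence described above is easy to observe numerically. The following sketch (an illustration, not the paper's proof technique) runs row-cyclic Jacobi sweeps on a random symmetric matrix and prints the off-norm Ω(A) = ‖A − diag(A)‖_F after each sweep; once the iteration enters the asymptotic regime, the off-norm roughly squares from one sweep to the next.

```python
import numpy as np

def off_norm(A):
    """Off-norm Omega(A): Frobenius norm of A with its diagonal removed."""
    return np.linalg.norm(A - np.diag(np.diag(A)))

def jacobi_sweep(A):
    """One row-cyclic sweep: annihilate A[p, q] for all p < q in row order."""
    A = A.copy()
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            if A[p, q] == 0.0:
                continue
            # Rotation angle that zeros the (p, q) entry:
            # tan(2*theta) = 2*A[p,q] / (A[q,q] - A[p,p]).
            theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q] = s
            J[q, p] = -s
            A = J.T @ A @ J  # similarity transform preserves eigenvalues
    return A

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T  # random symmetric test matrix
for sweep in range(5):
    print(f"sweep {sweep}: off-norm = {off_norm(A):.3e}")
    A = jacobi_sweep(A)
```

After a few sweeps the diagonal of A approximates the eigenvalues, and the printed off-norm drops far faster than linearly, consistent with the quadratic bounds discussed in the cited statement.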
“…It was later shown in [9,10] that for matrices with well-separated eigenvalues, the CJT has a quadratic convergence rate. This result was extended in [11] to a more general case which includes identical eigenvalues and clusters of eigenvalues (that is, very close eigenvalues). Studies have shown that in practice the number of iterations required for the CJT to reach its asymptotic quadratic convergence rate is small, but this has not been proven rigorously.…”
Section: The One-bit Blind Null Space Learning Algorithm (OBNSLA)
confidence: 94%
“…The theorem enables the SU to solve the optimization problems in (11) and (14) via line searches based on {x(n), q(n)}_{n=1}^T. This is because under the OC, the SU can extract h(x(n), x(n − m)), which indicates whether S(G, x(n)) > S(G, x(n − m)) is true or false.…”
Section: A. The One-bit Line Search
confidence: 99%