1981
DOI: 10.1007/bf01086544
Computational methods of linear algebra

Cited by 91 publications (60 citation statements)
References 843 publications
“…Gradient Descent: For a rank-one problem, a steepest gradient algorithm has been shown to converge globally to an eigenvalue [19]. This eigenvalue is the optimum one if the starting point is not orthogonal to the optimum eigenvector.…”
Section: Introduction
confidence: 99%
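The steepest-descent behavior described in this citation can be sketched as follows. This is a generic gradient descent on the Rayleigh quotient, not the specific algorithm analyzed in [19]; the test matrix, step size, and tolerances are illustrative assumptions.

```python
import numpy as np

def rayleigh_quotient_descent(A, x, step=0.1, tol=1e-10, max_iter=5000):
    """Steepest descent on the Rayleigh quotient rho(x) = x.T A x / x.T x.

    Illustrative sketch: for a unit vector x the gradient of rho is
    2*(A @ x - rho * x), so descent drives x toward the eigenvector of
    the smallest eigenvalue, provided the starting point is not
    orthogonal to it.
    """
    x = x / np.linalg.norm(x)
    for _ in range(max_iter):
        rho = x @ A @ x
        grad = 2.0 * (A @ x - rho * x)   # gradient of rho on the unit sphere
        if np.linalg.norm(grad) < tol:
            break
        x = x - step * grad
        x = x / np.linalg.norm(x)        # project back to the unit sphere
    return x @ A @ x, x

# Symmetric test matrix with eigenvalues {1, 2, 4}: descent converges
# to the smallest eigenvalue, 1.
A = np.diag([4.0, 2.0, 1.0])
lam, v = rayleigh_quotient_descent(A, np.array([1.0, 1.0, 1.0]))
```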
“…Conjugate Gradient: The global convergence result of [19] is extended to a generalized RQ conjugate-gradient algorithm for rank one in [21]. Local convergence properties of conjugate-gradient algorithms are discussed in [22], [23], showing faster convergence of conjugate gradient than of steepest descent.…”
Section: Introduction
confidence: 99%
“…From our trials, it is evident that in all three cases the rate of convergence of our new algorithm is better than, or at least as fast as, that of the power method [4]. The QR [13] algorithm converges very slowly in the last two cases, when the separation between the eigenvalues is poor.…”
Section: Corollary 3: Any Oepomo's Alternating Sequence Iteration Con…
confidence: 85%
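The power method used as the baseline in this comparison can be sketched as below; its rate is governed by the ratio |lambda_2 / lambda_1|, which is why poor eigenvalue separation slows convergence. The matrix and tolerances are illustrative assumptions, not the cited test cases.

```python
import numpy as np

def power_method(A, x, max_iter=1000, tol=1e-12):
    """Classical power iteration for the dominant eigenpair.

    Converges linearly with factor |lambda_2 / lambda_1|; a small gap
    between the two largest eigenvalues makes it slow.
    """
    x = x / np.linalg.norm(x)
    lam = x @ A @ x
    for _ in range(max_iter):
        y = A @ x
        x_new = y / np.linalg.norm(y)     # normalize to avoid overflow
        lam_new = x_new @ A @ x_new       # Rayleigh-quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x
```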
“…Following the arguments of Faddeev and Faddeeva [7,8], this Relaxed Monte Carlo method will converge if…”
Section: Relaxed Monte Carlo Methods
confidence: 99%
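The convergence condition is elided in the snippet above, so the following is only a generic sketch of the von Neumann-Ulam random-walk scheme in the spirit of Faddeev and Faddeeva, not the relaxed variant cited. It estimates the solution of x = H x + f by sampling terms of the Neumann series, which converges when the spectral radius of H is below 1; the uniform transition probabilities are an assumption.

```python
import numpy as np

def monte_carlo_solve(H, f, n_walks=2000, max_len=50, seed=0):
    """Von Neumann-Ulam random-walk estimate of x solving x = H x + f.

    Averages the terms of the Neumann series sum_k H^k f along random
    walks; the series (and hence the estimator) converges when the
    spectral radius of H is below 1.  Transitions are uniform here,
    with the entries of H carried as importance weights.
    """
    rng = np.random.default_rng(seed)
    n = len(f)
    p = 1.0 / n                          # uniform transition probability
    x = np.zeros(n)
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            state, weight, est = i, 1.0, f[i]
            for _ in range(max_len):     # truncate each walk at max_len steps
                nxt = rng.integers(n)
                weight *= H[state, nxt] / p   # importance weight for H^k term
                state = nxt
                est += weight * f[state]
            total += est
        x[i] = total / n_walks
    return x
```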