2008 IEEE International Conference on Computer-Aided Control Systems
DOI: 10.1109/cacsd.2008.4627370
Memory-efficient Krylov subspace techniques for solving large-scale Lyapunov equations

Abstract: This paper considers the solution of large-scale Lyapunov matrix equations of the form AX + XA^T = −bb^T. The Arnoldi method is a simple but sometimes ineffective approach to such equations. One of its major drawbacks is excessive memory consumption caused by slow convergence. To overcome this disadvantage, we propose two-pass Krylov subspace methods, which compute only the solution of the compressed equation in the first pass. The second pass computes the product of the Krylov subspace basis with a l…

Cited by 11 publications (17 citation statements)
References 32 publications
“…In the right-hand side of Table 6.1 we report the results in the case where the residual norm is computed every 10 iterations. Table 6.2 shows that the two-pass strategy of Section 3.3 drastically reduces the memory requirements of the solution process, as already observed in [12], at a negligible percentage of the total execution time, where A, E ∈ R^{n×n}, n = 79841, C ∈ R^{n×s}, s = 7.…”
Section: Large (supporting)
confidence: 57%
“…In the case of K^□_m, a "two-pass" strategy is implemented to avoid storing the whole basis V_m; see [12] for earlier use of this device in the same setting and, e.g., [7] in the matrix-function context.…”
Section: Introduction: Consider the Sylvester Matrix Equation (mentioning)
confidence: 99%
“…Indeed, for A and B symmetric, not necessarily equal, an orthogonal basis of each standard Krylov subspace, together with the projected matrix, could be generated without storing the whole basis, but only the last three (block) vectors, because the orthogonalization process reduces to the short-term Lanczos recurrence [220]. Therefore, in a first pass only the projected solution Y could be determined while limiting the storage for V_k and W_j; at convergence the approximate solution X = V_k Y W_j^T could be recovered by generating the two bases once again and updating X on the fly with the already computed Y; an implementation of such an approach can be found in [162] for B = A and C_1 = C_2. The same idea could be used in other situations where a short-term recurrence is viable; the effectiveness of the overall method strongly depends on the affordability of computing the two bases twice.…”
Section: Sylvester Equation, Large (mentioning)
confidence: 99%
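The two-pass idea described in these excerpts can be sketched in a few lines for the symmetric Lyapunov case AX + XA^T = −bb^T: the first pass runs the short-term Lanczos recurrence, keeping only three basis vectors and the tridiagonal projection T_m; the compressed equation T_m Y + Y T_m = −‖b‖² e₁e₁^T is solved; the second pass regenerates the same basis and accumulates the low-rank factor Z = V_m L (with Y ≈ LL^T) on the fly. This is a minimal illustrative sketch assuming a symmetric negative-definite A, not the cited authors' implementation; all names are made up for the example.

```python
# Minimal two-pass Lanczos sketch for AX + XA^T = -bb^T, A symmetric
# negative definite. Illustrative only; real codes add reorthogonalization,
# residual-based stopping, and breakdown checks.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov


def two_pass_lyapunov(A, b, m):
    n = b.size
    beta0 = np.linalg.norm(b)
    alphas = np.zeros(m)
    betas = np.zeros(max(m - 1, 0))
    # Pass 1: short-term Lanczos recurrence. Only three length-n vectors are
    # kept, so basis storage is O(n) instead of O(n*m).
    v_prev = np.zeros(n)
    v = b / beta0
    for j in range(m):
        w = A @ v
        alphas[j] = v @ w
        w = w - alphas[j] * v - (betas[j - 1] * v_prev if j > 0 else 0.0)
        if j < m - 1:
            betas[j] = np.linalg.norm(w)
            v_prev, v = v, w / betas[j]
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    # Compressed equation: T Y + Y T = -beta0^2 e1 e1^T (solved densely, m small).
    rhs = np.zeros((m, m))
    rhs[0, 0] = -beta0**2
    Y = solve_continuous_lyapunov(T, rhs)
    # Factor Y ~= L L^T (Y is symmetric PSD for stable A); drop tiny eigenvalues.
    w_eig, U = np.linalg.eigh(Y)
    keep = w_eig > 1e-12 * w_eig.max()
    L = U[:, keep] * np.sqrt(w_eig[keep])
    # Pass 2: regenerate the same Lanczos vectors and accumulate Z = V_m L
    # on the fly, so the full basis is again never stored.
    Z = np.zeros((n, L.shape[1]))
    v_prev = np.zeros(n)
    v = b / beta0
    for j in range(m):
        Z += np.outer(v, L[j])
        if j < m - 1:
            w = A @ v - alphas[j] * v - (betas[j - 1] * v_prev if j > 0 else 0.0)
            v_prev, v = v, w / betas[j]
    return Z  # low-rank factor: X ~= Z Z^T
```

Peak basis storage is three vectors of length n rather than m of them, at the price of m extra matrix-vector products in the second pass, which is exactly the trade-off the last excerpt notes ("the affordability of computing the two bases twice").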
“…For symmetric (positive definite) A, B, a two-pass SKSM, such as the one discussed in Reference 29, could be used. During the first pass, only the projected equation is constructed and solved; in the second pass, the method computes the product of the Krylov subspace bases with the low-rank factors of the projected solution.…”
Section: Introduction (mentioning)
confidence: 99%