2014
DOI: 10.1016/j.camwa.2014.09.009

A preconditioned nested splitting conjugate gradient iterative method for the large sparse generalized Sylvester equation

Abstract: A nested splitting conjugate gradient (NSCG) iterative method and a preconditioned NSCG (PNSCG) iterative method are presented for solving the generalized Sylvester equation with large sparse coefficient matrices. Both methods are inner/outer iterations: a CG-like method is employed as the inner iteration to approximate each outer iterate, while each outer iteration is induced by a convergent, symmetric positive definite splitting of the coefficient matrices. Convergen…
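The abstract describes an inner/outer scheme: the coefficient matrices are split into a symmetric positive definite part and a remainder, the SPD part defines each outer iteration, and a CG method solves the resulting SPD subproblem. The following is a minimal sketch of that idea for the continuous Sylvester equation AX + XB = C, using the Hermitian/skew-Hermitian splitting as an illustrative choice of SPD splitting; function names and tolerances are my own, not the paper's.

```python
import numpy as np

def cg_sylvester(HA, HB, RHS, X0, tol=1e-10, maxit=200):
    # CG in the Frobenius inner product for the operator L(X) = HA X + X HB,
    # which is SPD when HA and HB are symmetric positive definite.
    X = X0.copy()
    R = RHS - (HA @ X + X @ HB)
    P = R.copy()
    rs = np.sum(R * R)
    for _ in range(maxit):
        if np.sqrt(rs) < tol:
            break
        LP = HA @ P + P @ HB
        alpha = rs / np.sum(P * LP)
        X += alpha * P
        R -= alpha * LP
        rs_new = np.sum(R * R)
        P = R + (rs_new / rs) * P
        rs = rs_new
    return X

def nscg(A, B, C, tol=1e-8, maxit=100):
    # Outer iteration induced by the splitting A = HA + SA, B = HB + SB
    # (symmetric and skew-symmetric parts; HA, HB assumed SPD here):
    #   HA X_{k+1} + X_{k+1} HB = C - SA X_k - X_k SB
    HA, SA = (A + A.T) / 2, (A - A.T) / 2
    HB, SB = (B + B.T) / 2, (B - B.T) / 2
    X = np.zeros_like(C)
    for _ in range(maxit):
        RHS = C - (SA @ X + X @ SB)
        X = cg_sylvester(HA, HB, RHS, X)
        if np.linalg.norm(A @ X + X @ B - C) < tol * np.linalg.norm(C):
            break
    return X
```

This sketch converges when the skew-symmetric parts are small relative to the SPD parts; the paper's convergence analysis makes the precise conditions explicit.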

Cited by 24 publications (9 citation statements); references 23 publications.
“…In this section, we apply the GADI framework to solve the matrix equation. We use a representative example, i.e., the continuous Sylvester equation, which has been widely used in control theory and numerical PDEs [16,26,28], to demonstrate the implementation. Concretely, the continuous Sylvester equation can be written as…”
Section: Algorithm 2.2 PCG (PCGNE) for Solving (αI + H
Mentioning confidence: 99%
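The quoted statement truncates before the equation itself; the continuous Sylvester equation it refers to is AX + XB = C. For a small dense instance, it can be solved directly with SciPy's Bartels-Stewart solver (assuming SciPy is available; this is a standard library routine, not the iterative method of the cited paper):

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])

# Direct solve of the continuous Sylvester equation A X + X B = C
X = solve_sylvester(A, B, C)
```

Direct solvers like this are the baseline for small problems; the NSCG/PNSCG methods target the large sparse case where dense factorizations are infeasible.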
“…There are two main methods for selecting parameters. One is traversing parameters or experimental determination within some intervals to obtain relatively optimal parameters [2,16,28]. The advantage of this traversal method is that it can obtain relatively accurate optimal parameters, but obviously, it consumes a lot of extra time.…”
Section: Introduction
Mentioning confidence: 99%
“…Problem 1 Given A ∈ R^{m×s}, B ∈ R^{l×n} and C ∈ R^{m×n}, find nonnegative matrices X ∈ R^{s×n} and Y ∈ R^{m×l} such that AX + YB = C. This class of linear matrix equations has been investigated by many authors, and a series of important and useful results has been obtained (Baksalary and Kala 1979; Ziȩtak 1984, 1988; Zak 1985; Yong 2006; Huang 2004; Ke and Ma 2014). For example, Baksalary and Kala (1979) obtained a necessary and sufficient condition for solvability and the general solution of the matrix equation AX − YB = C. In Ziȩtak (1984), Ziȩtak analyzed and gave an algorithm for the l_p solutions of (1.1).…”
Section: Introduction
Mentioning confidence: 99%
“…Moreover, Krylov subspace methods are very slow, or even fail to converge, if not suitably preconditioned. Therefore, many researchers have devoted themselves to preconditioned iterative methods for (1.1) (see [9–27]). Various preconditioners for the saddle point matrix have been studied, such as symmetric indefinite preconditioners [28,29], inexact constraint preconditioners [29–34] and primal-based penalty preconditioners [35].…”
Section: Introduction
Mentioning confidence: 99%
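The last quoted statement notes that Krylov methods stall without good preconditioning. As a generic illustration of the mechanism (not one of the saddle-point preconditioners cited above), here is a minimal preconditioned conjugate gradient sketch with a simple Jacobi (diagonal) preconditioner; the function and variable names are illustrative only.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    # Preconditioned CG for an SPD matrix A.
    # M_inv(r) applies the inverse of the preconditioner M to a residual.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: an SPD matrix with a strongly varying diagonal, where
# Jacobi preconditioning noticeably helps plain CG.
rng = np.random.default_rng(1)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + np.diag(np.linspace(1.0, 1e4, n))  # SPD by construction
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

The saddle-point preconditioners in [28–35] follow the same template but exploit the 2x2 block structure of the saddle-point matrix instead of its diagonal.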