Motivated by the Cayley-Hamilton theorem, a novel adaptive procedure, called a Power Sparse Approximate Inverse (PSAI) procedure, is proposed that uses a different adaptive sparsity pattern selection approach to constructing a right preconditioner M for the large sparse linear system Ax = b. It determines the sparsity pattern of M dynamically and computes the n independent columns of M that are optimal in the Frobenius norm minimization, subject to the sparsity pattern of M. The PSAI procedure needs a matrix-vector product at each step and updates the solution of a small least squares problem cheaply. To control the sparsity of M and develop a practical PSAI algorithm, two dropping strategies are proposed. The PSAI algorithm can capture an effective approximate sparsity pattern of A −1 and compute a good sparse approximate inverse M efficiently. Numerical experiments are reported to verify the effectiveness of the PSAI algorithm. Numerical comparisons are made for the PSAI algorithm and the adaptive SPAI algorithm proposed by Grote and Huckle, as well as for the PSAI algorithm and three static Sparse Approximate Inverse (SAI) algorithms. The results indicate that the PSAI algorithm is at least comparable to, and can be much more effective than, the adaptive SPAI algorithm; it often outperforms the static SAI algorithms considerably and is more robust and practical than the static ones for general problems.

forms do, because they can express denser matrices than the total number of nonzero entries in their factors. However, factorized forms have their own drawbacks, and, like ILU [7], they can fail due to breakdown during an incomplete factorization process. A comprehensive survey of sparse approximate inverse preconditioners, together with extensive numerical comparisons aimed at assessing the overall performance of the various methods, can be found in [46].
We concentrate on the second kind of SAI preconditioners in this paper. A key issue is to determine the sparsity pattern of M effectively. The initial work prescribes it in advance [21][22][23]. Once the pattern is given, the computation of M is straightforward: one solves n independent small least squares problems to get all the columns of M. This is called a static SAI procedure. M is required to be sparse as well as to approximate A −1 . Much work has been done on prescribing the sparsity pattern of M; e.g. [11,19,20,30,39,40,43]. The most common a priori pattern of M is simply that of A, which is called the structure of the 1-local matrix [22]. This choice can give good results for many problems but can also fail for many others. One improvement is to use the sparsity structure of the q-local matrix for q > 1. However, M becomes denser quickly as q increases, and even q = 2 is often impractical [30]. SAI techniques are based on the implicit assumption that the majority of the entries in A −1 are small, so that it is possible to find a sparse matrix M that is a good approximation to A −1 ....
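The static SAI procedure described above can be sketched as follows. This is a minimal illustrative implementation (not the PSAI algorithm of this paper): the sparsity pattern of M is fixed a priori to that of A (the 1-local pattern), and each column m_j of M is computed independently by solving the small least squares problem min ||A m_j − e_j||_2 restricted to that pattern, which together amount to the Frobenius-norm minimization min ||AM − I||_F. The function name `static_sai` and the use of dense `lstsq` for the small subproblems are choices made here for illustration; a nonzero diagonal of A is assumed.

```python
import numpy as np
import scipy.sparse as sp

def static_sai(A):
    """Right SAI preconditioner M with pattern(M) = pattern(A) (1-local pattern).

    Each column of M solves a small independent least squares problem,
    so the whole computation is embarrassingly parallel.
    """
    A = sp.csc_matrix(A)
    n = A.shape[0]
    cols = []
    for j in range(n):
        # J: allowed nonzero positions of column j of M (pattern of A's column j)
        J = A[:, j].indices
        sub = A[:, J]
        # I: rows of A touched by the columns in J (the "shadow" of J);
        # outside I the residual A m_j - e_j is fixed, so it can be dropped
        I = np.unique(sub.indices)
        # Right-hand side: e_j restricted to the rows in I
        e = (I == j).astype(float)
        # Small dense least squares: minimize ||A(I, J) m - e_j(I)||_2
        m, *_ = np.linalg.lstsq(sub[I, :].toarray(), e, rcond=None)
        col = np.zeros(n)
        col[J] = m
        cols.append(col)
    return sp.csc_matrix(np.column_stack(cols))
```

Because the subproblems are decoupled, the n columns can be computed in parallel, which is one of the main attractions of SAI preconditioning over incomplete factorizations.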