Abstract. This paper addresses the parallelization of preconditioned iterative methods that use explicit preconditioners such as approximate inverses. Parallelizing a full step of these methods requires the coefficient and preconditioner matrices to be well partitioned. We first show that different methods impose different partitioning requirements on the matrices. Then we develop hypergraph models to meet those requirements. In particular, we develop models that enable us to obtain partitionings of the coefficient and preconditioner matrices simultaneously. Experiments on a set of unsymmetric sparse matrices show that the proposed models yield effective partitioning results. A parallel implementation of the right preconditioned BiCGStab method on a PC cluster verifies that the theoretical gains obtained by the models hold in practice.

Key words. matrix partitioning, preconditioning, iterative method, parallel computing

AMS subject classifications. 05C50, 05C65, 65F10, 65F35, 65F50, 65Y05

DOI. 10.1137/040617431

1. Introduction. We consider the parallelization of preconditioned iterative methods that use explicit preconditioners such as approximate inverses or factored approximate inverses. Our objective is to develop methods for obtaining one-dimensional (1D) partitions of a coefficient matrix and a preconditioner matrix, or the factors of a preconditioner matrix, simultaneously, in order to efficiently parallelize a full step of the preconditioned iterative methods. We assume that the preconditioner matrices or their sparsity patterns are available beforehand. It has been shown that the rate of convergence of iterative methods depends on the partitioning method when the preconditioners are built from partitioned coefficient matrices [26]. Under the above assumption, we neither deteriorate nor improve the effect of the selected preconditioner on the rate of convergence.
Our assumption is justified in applications where the preconditioner matrices can be reused; see, for example, [12] and a discussion of it in [10]. The assumption is also justified for preconditioner construction methods that require a priori sparsity patterns for the preconditioner matrices [44,45]; techniques for developing effective sparsity patterns already exist in the literature [23,24,39].

Approximate inverse preconditioning techniques explicitly compute and store a sparse matrix M ≈ A^{-1} to be used as a preconditioner. Applying such a preconditioner requires one or two matrix-vector multiplications. Two types of approximate inverses exist in the literature. In the first type, the approximate inverse is stored as a single matrix, whereas in the second type it is stored as a product of two matrices. Preconditioners of the second type are referred to as factored approximate inverses. Among the most notable approximate inverse preconditioners are AINV and its variants by Benzi et al. [5,6,7,8]; SPAI by Grote and Huckle [33]; FSAI by Kolotilina and Yeremin [44,45]; and MR by Chow and Saad [25]. See [4,9,31] for
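To make the two application modes concrete, the following sketch (not the paper's implementation) contrasts applying a single-matrix approximate inverse (one sparse matrix-vector product) with a factored one (two products), inside SciPy's BiCGStab. The diagonal stand-in for M, and the names `apply_single` and `apply_factored`, are illustrative assumptions; a real approximate inverse such as AINV or SPAI would be far denser and more accurate.

```python
import numpy as np
from scipy.sparse import diags, identity, random as sparse_random
from scipy.sparse.linalg import LinearOperator, bicgstab

n = 200
rng = np.random.default_rng(0)
# A diagonally dominant test matrix, so the toy preconditioner is adequate.
A = (10.0 * identity(n) + sparse_random(n, n, density=0.02, random_state=0)).tocsr()
b = rng.standard_normal(n)

d = A.diagonal()
M = diags(1.0 / d)                 # stand-in for M ~= A^{-1}, stored as one matrix
M1 = M2 = diags(1.0 / np.sqrt(d))  # stand-in factored form, M = M2 @ M1

def apply_single(M, r):
    # First type: approximate inverse stored as a single matrix -> one SpMV.
    return M @ r

def apply_factored(M1, M2, r):
    # Second type (factored approximate inverse) -> two SpMVs, right to left.
    return M2 @ (M1 @ r)

# Use the single-matrix form as the preconditioner inside BiCGStab.
M_op = LinearOperator((n, n), matvec=lambda r: apply_single(M, r))
x, info = bicgstab(A, b, M=M_op)
```

In a parallel setting, each of these applications is a sparse matrix-vector multiply, which is why the partitioning of both A and M (or its factors M1, M2) governs communication cost, as discussed above.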