Abstract. We consider two-dimensional partitioning of general sparse matrices for the parallel sparse matrix-vector multiply operation. We present three hypergraph-partitioning-based methods, each having unique advantages. The first treats the nonzeros of the matrix individually and hence produces fine-grain partitions. The other two produce coarser partitions: one imposes a limit on the number of messages sent and received by a single processor, and the other trades that limit for a lower communication volume. We also present a thorough experimental evaluation of the proposed two-dimensional partitioning methods together with the hypergraph-based one-dimensional partitioning methods, using an extensive set of public domain matrices. Furthermore, for the users of these partitioning methods, we present a partitioning recipe that chooses one of the methods according to certain matrix characteristics.

Key words. sparse matrix partitioning, parallel matrix-vector multiplication, hypergraph partitioning, two-dimensional partitioning, combinatorial scientific computing

AMS subject classifications. 05C50, 05C65, 65F10, 65F50, 65Y05

DOI. 10.1137/080737770

1. Introduction. The sparse matrix-vector multiply operation forms the computational core of many iterative methods, including solvers for linear systems, linear programs, eigensystems, and least squares problems. In these solvers, the computation y ← Ax is performed repeatedly with the same large, sparse, possibly unsymmetric or rectangular matrix A and with a changing input vector x. Our aim is to parallelize these multiply operations efficiently by two-dimensional (2D) partitioning of the matrix A in such a way that the computational load per processor is balanced and the communication overhead is low.

Graph and hypergraph partitioning models have been used for one-dimensional (1D) partitioning of sparse matrices [4, 5, 8, 9, 19, 20, 25, 26, 30, 32, 37]. In these models, a K-way partition of the vertices of a given graph or hypergraph is computed. The partitioning constraint is to maintain a balance criterion on the number of vertices in each part; if the vertices are weighted, then the constraint is to maintain a balance criterion on the sum of the vertex weights in each part. The partitioning objective is to minimize the cutsize of the partition defined over the edges or hyperedges. The partitioning constraint and objective relate, respectively, to maintaining a computational
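
For concreteness, the following is a minimal sketch (not taken from the paper) of how the partitioning constraint and objective just described can be evaluated for a given K-way partition. The hypergraph representation used here (hyperedges as lists of vertex ids, a weight per vertex, a part id per vertex) and the function names are assumptions for illustration; the connectivity-1 cutsize is one standard definition of cutsize over hyperedges.

```python
# Sketch only: evaluate the balance constraint and a hyperedge cutsize
# for a given K-way partition of a hypergraph.

def imbalance(weights, part, K):
    """Ratio of the heaviest part's weight to the average part weight
    (1.0 means a perfectly balanced partition)."""
    loads = [0.0] * K
    for v, w in enumerate(weights):
        loads[part[v]] += w
    return max(loads) / (sum(loads) / K)

def cutsize(hyperedges, part):
    """Connectivity-1 cutsize: each hyperedge contributes
    (number of parts it spans) - 1."""
    total = 0
    for net in hyperedges:
        total += len({part[v] for v in net}) - 1
    return total

# Toy example: 4 unit-weight vertices, 3 hyperedges, 2 parts.
weights = [1, 1, 1, 1]
part = [0, 0, 1, 1]
hyperedges = [[0, 1], [1, 2], [0, 2, 3]]
print(imbalance(weights, part, K=2))  # 1.0 (balanced)
print(cutsize(hyperedges, part))      # 2 (two hyperedges span both parts)
```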