Abstract. In this paper we present a communication avoiding ILU0 preconditioner for solving large linear systems of equations with iterative Krylov subspace methods. Recent research has focused on communication avoiding Krylov subspace methods based on so-called s-step methods. However, there are still few communication avoiding preconditioners, and this represents a serious limitation of these methods. Our preconditioner allows us to perform s iterations of the iterative method with no communication, by ghosting some of the input data and performing redundant computation. To avoid communication, an alternating reordering algorithm is introduced for structured and well-partitioned unstructured matrices; it requires the input matrix to be ordered with a graph partitioning technique such as k-way or nested dissection. We show that the reordering does not affect the convergence rate of the ILU0 preconditioned system compared to k-way or nested dissection ordering, while it reduces data movement and is expected to reduce the time needed to solve a linear system. In addition to communication avoiding Krylov subspace methods, our preconditioner can be used with classical methods such as GMRES to reduce communication.

In the parallel case, the input matrix is distributed over processors, and each iteration involves multiplying the input matrix with a vector, followed by an orthogonalization process. Both operations require communication among processors. Since the input matrix A is usually very sparse, communication dominates the overall cost of the iterative methods as the number of processors grows large. While the matrix-vector product can be performed with point-to-point communication routines between subsets of processors, the orthogonalization step requires collective communication routines, which are known to scale poorly. More generally, on current machines the cost of communication, that is, the movement of data, is much higher than the cost of arithmetic operations, and this gap is expected to continue to increase exponentially. As a result, communication is often the bottleneck in numerical algorithms. In a quest to address the communication problem, recent research has focused on reformulating linear algebra operations such that the movement of data is significantly reduced or even minimized, as in the case of dense matrix factorizations [12, 20, 3]. Such algorithms are referred to as communication avoiding. The communication avoiding Krylov subspace methods [29, 24, 6] are based on s-step methods.
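As a minimal illustration of the ghosting and redundant computation idea mentioned above (not the CA-ILU0 algorithm itself; the function names, the use of scipy.sparse, and the 1D Laplacian test matrix are illustrative assumptions), the sketch below shows how a process that owns a block of rows can prefetch, once, all rows of A within s hops of its block and then apply A to a vector s times using only local data, i.e. with no communication inside the loop. Only the unpreconditioned matrix powers part is shown.

```python
import numpy as np
import scipy.sparse as sp


def ghost_rows(A, owned, s):
    """Indices of rows reachable from `owned` in at most s hops of A's adjacency graph."""
    A = A.tocsr()
    reach = set(owned)
    frontier = set(owned)
    for _ in range(s):
        nxt = set()
        for i in frontier:
            nxt.update(A.indices[A.indptr[i]:A.indptr[i + 1]])
        frontier = nxt - reach
        reach |= nxt
    return sorted(reach)


def local_power(A, x, owned, s):
    """Compute (A^s x) restricted to `owned`, using only ghosted rows of A and x."""
    rows = ghost_rows(A, owned, s)       # one-time "communication": fetch ghost data
    loc = {g: l for l, g in enumerate(rows)}
    A_loc = A.tocsr()[rows][:, rows]     # ghosted submatrix, kept local
    x_loc = x[rows].copy()
    for _ in range(s):                   # s products with redundant work on ghost rows,
        x_loc = A_loc @ x_loc            # but no communication inside the loop
    return x_loc[[loc[i] for i in owned]]


# Small check against the global product on a 1D Laplacian.
n = 50
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
x = np.random.rand(n)
owned = list(range(10, 20))
s = 3

y = x.copy()
for _ in range(s):
    y = A @ y
assert np.allclose(local_power(A, x, owned, s), y[owned])
```

The owned entries are exact after s local products because each product shrinks the region of correct values by one layer of ghost rows; the extra work spent updating those outer layers is the redundant computation traded for avoiding communication.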