Abstract. A new method for the parallel solution of large sparse linear systems is introduced. It proceeds by dividing the equations into blocks and operating in block-parallel iterative mode; i.e., all the blocks are processed in parallel, and the partial results are "merged" to form the next iterate. The new scheme performs Kaczmarz row projections within the blocks and merges the results by certain component-averaging operations; hence it is called component-averaged row projections, or CARP. The system matrix can be general, nonsymmetric, and ill-conditioned, and the division into blocks is unrestricted. For partial differential equations (PDEs), if the blocks are domain-based, then only variables at the boundaries between domains are averaged, thereby minimizing data transfer between processors. CARP is very robust; its application to test cases of linear systems derived from PDEs shows that it converges in difficult cases where state-of-the-art methods fail. It is also very memory-efficient and exhibits an almost linear speedup ratio, with efficiency greater than unity in some cases. A formal proof of convergence is presented: it is shown that the component-averaging operations are equivalent to row projections in a certain superspace, so the convergence properties of CARP are identical to those of Kaczmarz's algorithm in the superspace. CARP and its convergence proof also apply to the consistent convex feasibility problem.

1. Introduction. Iterative methods for solving large sparse linear systems of equations offer advantages over the classical direct solvers, especially for very large systems. Methods of parallelizing iterative algorithms [20, 29] divide the equations into blocks and usually fall into one of two modes of operation. In the block-sequential (also called block-iterative) mode, the blocks are processed sequentially, but the computations on each block's equations are done in parallel. Examples of block-sequential methods are found in [1, 6, 10].
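As a concrete illustration of the scheme described in the abstract, the following is a minimal serial sketch of one CARP iteration: a Kaczmarz sweep is run independently within each block (the parallelizable part; here the sweeps run sequentially), and the results are merged by averaging each component over the blocks whose equations involve it. The block partition and relaxation parameter are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def kaczmarz_sweep(A, b, x, rows, relax=1.0):
    """One Kaczmarz sweep: successively project x onto the hyperplane
    of each equation <a_i, x> = b_i, for i in `rows`."""
    for i in rows:
        a = A[i]
        x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x

def carp_iteration(A, b, x, blocks, relax=1.0):
    """One CARP iteration: independent Kaczmarz sweeps in every block,
    then a component-averaging merge.  Each variable is averaged only
    over the blocks whose equations actually involve it."""
    sweeps, touched = [], []
    for rows in blocks:
        sweeps.append(kaczmarz_sweep(A, b, x.copy(), rows, relax))
        touched.append(np.any(A[rows] != 0, axis=0))
    x_new = x.copy()
    for j in range(len(x)):
        vals = [xb[j] for xb, t in zip(sweeps, touched) if t[j]]
        if vals:  # a variable touched by no block keeps its old value
            x_new[j] = np.mean(vals)
    return x_new
```

Note that when the blocks are domain-based, a variable interior to one domain is touched by that block alone, so the merge leaves it unchanged; only variables on the boundaries between domains are actually averaged, which is what keeps inter-processor data transfer small.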
In the second mode of operation, sometimes referred to as block-parallel, the blocks are assigned to different processors to be processed in parallel, and the results from all the blocks are then combined in some manner to produce the next iterate. Typical examples of block-parallel schemes are parallel versions of RGMRES [30], conjugate gradient (CG), and CG-squared (CGS) [31]; other examples include those in [2, 4, 9, 12, 13].

Kaczmarz's row projection method (KACZ) [23] was one of the first iterative methods used for large nonsymmetric systems. Its main advantages are robustness, guaranteed convergence on consistent systems, and cyclic convergence on inconsistent systems [8, 16, 32]. KACZ was also discovered independently in the context of image reconstruction from projections, where it is called ART (algebraic reconstruction technique) [21]. Kaczmarz's algorithm, by its nature and mathematical definition, is inherently sequential since, at each iterative step, the current iterate is projected