Abstract. For a sparse symmetric matrix, there has been much attention given to algorithms for reducing the bandwidth. As far as we can see, little has been done for the unsymmetric matrix $A$, which has distinct lower and upper bandwidths $l$ and $u$. When Gaussian elimination with row interchanges is applied, the lower bandwidth is unaltered, while the upper bandwidth becomes $l + u$. With column interchanges, the upper bandwidth is unaltered, while the lower bandwidth becomes $l + u$. We therefore seek to reduce $\min(l, u) + l + u$, which we call the total bandwidth. We compare applying the reverse Cuthill-McKee algorithm to $A + A^T$, to the row graph of $A$, and to the bipartite graph of $A$. We also propose an unsymmetric variant of the reverse Cuthill-McKee algorithm. In addition, we have adapted the node-centroid and hill-climbing ideas of Lim, Rodrigues, and Xiao to the unsymmetric case. We have found that using these to refine a Cuthill-McKee-based ordering can give significant further bandwidth reductions. Numerical results for a range of practical problems are presented and comparisons made with the recent lexicographical method of Baumann, Fleischmann, and Mutzbauer.

1. Introduction. If Gaussian elimination is applied without interchanges to an unsymmetric matrix $A = \{a_{ij}\}$ of order $n$, each fill-in takes place between the first entry of a row and the diagonal or between the first entry of a column and the diagonal. It is therefore sufficient to store all the entries in the lower triangle from the first entry in each row to the diagonal and all the entries in the upper triangle from the first entry in each column to the diagonal. This simple structure allows straightforward code using static data structures to be written. We will call the sum of the lengths of the rows the lower profile and the sum of the lengths of the columns the upper profile. We will also use the term lower bandwidth for $l = \max_{a_{ij} \neq 0}(i - j)$ and the term upper bandwidth for $u = \max_{a_{ij} \neq 0}(j - i)$. For a symmetric matrix, these are the same and are called the semibandwidth. A particularly simple data structure is available by taking account of only the bandwidths $l$ and $u$. If row interchanges are used for stability reasons during the factorization, it may be readily verified that the lower bandwidth remains $l$ but the upper bandwidth may increase to $l + u$. With column interchanges (or row interchanges applied while factorizing $A^T$), the upper bandwidth is unaltered, while the lower bandwidth becomes $l + u$. We may therefore always have one triangular factor of bandwidth $\min(l, u)$ and the other of bandwidth $l + u$. Thus we seek to reduce $\min(l, u) + l + u$, which we call the total bandwidth.

Many algorithms for reducing the bandwidth of a sparse symmetric matrix $A$ have been proposed in the literature, most of which make extensive use of the adjacency graph $G$ of the matrix. This is an undirected graph that has a node for each row (or
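As a minimal illustration of the bandwidth definitions above (a sketch only, assuming 0-based indices and a dense NumPy array standing in for a sparse matrix), the following computes $l$, $u$, and the total bandwidth $\min(l, u) + l + u$ for a small example:

```python
# Illustrative sketch: compute the lower and upper bandwidths of a
# matrix pattern, assuming 0-based indices and a dense NumPy array.
import numpy as np

def bandwidths(A):
    """Return (l, u): l = max(i - j) and u = max(j - i) over nonzeros."""
    rows, cols = np.nonzero(A)
    l = int(np.max(rows - cols))
    u = int(np.max(cols - rows))
    return l, u

# Small 5x5 unsymmetric pattern with l = 2 and u = 1.
A = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1]])

l, u = bandwidths(A)
total = min(l, u) + l + u   # the quantity the orderings aim to reduce
print(l, u, total)          # -> 2 1 4
```

For this pattern, row interchanges keep the lower bandwidth at $2$ but may raise the upper bandwidth to $l + u = 3$, whereas column interchanges keep the upper bandwidth at $1$ and may raise the lower bandwidth to $3$; choosing the latter gives triangular factors of bandwidths $\min(l, u) = 1$ and $l + u = 3$, so the total bandwidth is $4$.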