Abstract. Optimization algorithms typically require the solution of many systems of linear equations B_k y_k = b_k. When large numbers of variables or constraints are present, these linear systems can account for much of the total computation time. Both direct and iterative equation solvers are needed in practice. Unfortunately, most off-the-shelf solvers are designed for single systems, whereas optimization problems give rise to hundreds or thousands of systems. To avoid refactorization, or to speed the convergence of an iterative method, it is essential to exploit the fact that B_k is related to B_{k-1}. We review various sparse matrices that arise in optimization, and discuss compromises that are currently being made in dealing with them. Since significant advances continue to be made with single-system solvers, we give special attention to methods that allow such solvers to be used repeatedly on a sequence of modified systems (e.g., the product-form update; use of the Schur complement). The speed of factorizing a matrix then becomes relatively less important than the efficiency of subsequent solves with very many right-hand sides. At the same time, we hope that future improvements to linear-equation software will be oriented more specifically to the case of related matrices B_k.
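The idea of reusing work from B_{k-1} when solving with B_k can be illustrated with a rank-one modification. The sketch below is not from the paper: it uses an explicit inverse of B_0 as a stand-in for a stored factorization, and applies the Sherman–Morrison formula so that the updated system (B_0 + u v^T) y = b is solved without refactorizing. All names (B0, u, v) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Stand-in for a stored factorization of B0 (in practice one would keep
# an LU factorization and reuse its triangular solves, not an inverse).
B0_inv = np.linalg.inv(B0)

# Rank-one update: B1 = B0 + u v^T, as produced by a typical basis change.
u = rng.standard_normal(n)
v = rng.standard_normal(n)

# Sherman-Morrison: solve B1 y = b using only solves with B0.
z = B0_inv @ b          # B0^{-1} b
w = B0_inv @ u          # B0^{-1} u
y = z - w * (v @ z) / (1.0 + v @ w)

# Check against a direct solve of the updated system.
y_direct = np.linalg.solve(B0 + np.outer(u, v), b)
print(np.allclose(y, y_direct))
```

Each further update B_{k+1} = B_k + u v^T can be accumulated the same way (the product-form idea), at the cost of one extra correction term per update, until the accumulated updates make a fresh factorization worthwhile.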