Abstract. When combined with Krylov projection methods, polynomial filtering can provide a powerful method for extracting extreme or interior eigenvalues of large sparse matrices. This general approach can be quite efficient when a large number of eigenvalues is sought; however, its competitiveness depends critically on a good implementation. This paper presents a technique based on such a combination to compute a group of extreme or interior eigenvalues of a real symmetric (or complex Hermitian) matrix. The technique harnesses the effectiveness of the Lanczos algorithm with partial reorthogonalization and the power of polynomial filtering. Numerical experiments indicate that the method can be far superior to competing algorithms when a large number of eigenvalues and eigenvectors is to be computed.

Key words. Lanczos algorithm; polynomial filtering; partial reorthogonalization; interior eigenvalue problems.

1. Introduction. The problem addressed in this paper is to compute the eigenvalues located in a specified interval of a large real symmetric or complex Hermitian matrix, along with their associated eigenvectors. The interval, which we will also refer to as a 'window', can contain a subset of the largest or smallest eigenvalues, in which case the requested eigenvalues lie at one of the two ends of the spectrum. When the window lies well inside the interval containing the spectrum, the problem is often referred to as an 'interior eigenvalue problem'. Eigenvalues in the inner portion of the spectrum are called 'interior eigenvalues', though this is clearly a loose definition.

Computing a large number of interior eigenvalues of a large symmetric matrix remains one of the most difficult problems in computational linear algebra today. The classical approach to the problem is to use some form of shift-and-invert technique. If we are interested in the eigenvalues around a certain shift $\sigma$, shift-and-invert consists of using a projection-type method (subspace iteration, Lanczos) to compute the eigenvalues of the matrix $(A - \sigma I)^{-1}$. The eigenvalues $(\lambda_i - \sigma)^{-1}$ of this matrix become the dominant eigenvalues for those $\lambda_i$'s close to $\sigma$, and as a result they are easy to compute with the projection method; a minimal numerical sketch of this idea is given at the end of this section. This approach has been the most common in structural analysis codes [1]. Computational codes based on this approach select a shift dynamically and perform a factorization of the matrix $A - \sigma I$ (or $A - \sigma B$ in the generalized case).

There are a number of situations in which shift-and-invert is either inapplicable or too slow to be of practical interest. For example, it is known that problems based on a 3-D physical mesh tend to give matrices that are very expensive to factor, in terms of both computational and memory requirements. There are also situations in which the matrix $A$ is not available explicitly but only through a subroutine that performs matrix-vector products. Finally, in the situation, common in electronic structure calculations, where a very large number of eigenvalues is to be computed, the number of factorizations to be performed ...
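The following short Python sketch illustrates the shift-and-invert mechanism discussed above. It is a minimal illustration, not the method proposed in this paper: it calls SciPy's `eigsh` in shift-invert mode on a 1-D Laplacian test matrix, and the matrix, the shift, and the number of requested eigenpairs are arbitrary choices made for the example.

    # Minimal sketch of the classical shift-and-invert approach described
    # above, using SciPy's sparse eigensolver.  The test matrix, the shift
    # sigma, and the eigenpair count k are illustrative assumptions.
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 2000
    # Sample sparse symmetric matrix: the tridiagonal 1-D Laplacian,
    # whose eigenvalues lie in the interval (0, 4).
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

    sigma = 1.0   # we seek eigenvalues of A near this interior shift
    k = 10        # number of eigenpairs requested

    # With sigma given, eigsh factors A - sigma*I once and runs Lanczos on
    # (A - sigma*I)^{-1}; the eigenvalues lambda_i closest to sigma, for
    # which (lambda_i - sigma)^{-1} is largest in magnitude, converge first.
    vals, vecs = eigsh(A, k=k, sigma=sigma, which="LM")
    print(vals)   # the k eigenvalues of A nearest sigma

Note that the single sparse factorization of $A - \sigma I$ performed inside `eigsh` is precisely the step that becomes prohibitive in the situations enumerated above, which is what motivates the factorization-free, polynomial-filtering alternative developed in this paper.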