New methods for computing eigenvectors of symmetric block tridiagonal matrices based on twisted block factorizations are explored. The relation between the block where two twisted factorizations meet and an eigenvector of the block tridiagonal matrix is reviewed. Based on this, several new algorithmic strategies for computing the eigenvector efficiently are motivated and designed. The underlying idea is to determine a good starting vector for an inverse iteration process from the twisted block factorizations, such that a good eigenvector approximation can be computed with a single step of inverse iteration. An implementation of the new algorithms is presented, and experimental data on runtime behaviour and numerical accuracy for a wide range of test cases are summarized. Compared with competing state-of-the-art tridiagonalization-based methods, the algorithms proposed here show strong reductions in runtime, especially for very large matrices and/or small bandwidths. The residuals of the computed eigenvectors are in general comparable with those of state-of-the-art methods. In some cases, especially for strongly clustered eigenvalues, a loss of orthogonality in some eigenvectors is observed. This is not surprising, and future work will focus on improving these cases.
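To make the twisted-factorization idea concrete, the following sketch shows the well-known scalar (non-block) analogue for a symmetric tridiagonal matrix: a forward LDL^T and a backward UDU^T factorization of T - lambda*I are joined at the twist index where they agree best, and the eigenvector is assembled outwards from that index. This is a minimal illustration under simplifying assumptions, not the paper's block algorithm, which additionally refines the result with a step of inverse iteration; the function name and interface are hypothetical.

```python
import numpy as np

def twisted_eigenvector(d, e, lam):
    """Scalar-tridiagonal sketch of the twisted-factorization idea.

    d, e: diagonal and off-diagonal of a symmetric tridiagonal matrix T.
    lam:  an (approximate) eigenvalue of T.
    NOTE: simplified illustration; the paper works with *block* tridiagonal
    matrices and follows this with one inverse-iteration step.
    """
    n = len(d)
    dl = d - lam

    # Forward (top-down) factorization:  T - lam*I = L D+ L^T
    Dp = np.empty(n)
    L = np.empty(n - 1)
    Dp[0] = dl[0]
    for i in range(n - 1):
        L[i] = e[i] / Dp[i]
        Dp[i + 1] = dl[i + 1] - e[i] * L[i]

    # Backward (bottom-up) factorization:  T - lam*I = U D- U^T
    Dm = np.empty(n)
    U = np.empty(n - 1)
    Dm[-1] = dl[-1]
    for i in range(n - 2, -1, -1):
        U[i] = e[i] / Dm[i + 1]
        Dm[i] = dl[i] - e[i] * U[i]

    # Twist elements gamma_k: diagonal entry where the two factorizations meet.
    gamma = Dp + Dm - dl
    k = int(np.argmin(np.abs(gamma)))

    # Assemble the eigenvector from the twist index outwards.
    z = np.empty(n)
    z[k] = 1.0
    for i in range(k - 1, -1, -1):   # entries above the twist
        z[i] = -L[i] * z[i + 1]
    for i in range(k, n - 1):        # entries below the twist
        z[i + 1] = -U[i] * z[i]
    return z / np.linalg.norm(z)


# Tiny usage example: T = tridiag(1, 2, 1), eigenvalue closest to the bottom.
d = np.full(5, 2.0)
e = np.full(4, 1.0)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
lam = np.linalg.eigvalsh(T)[0]
z = twisted_eigenvector(d, e, lam)
print(np.linalg.norm(T @ z - lam * z))   # residual should be near machine precision
```

In the block setting of the paper, the scalars D+, D- and gamma become small blocks, and the vector obtained from the twist block serves as the starting vector for a single inverse-iteration step rather than as the final eigenvector.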
We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransforming the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: in several scenarios, substantial gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. Under the IEEE 754 standard for floating-point arithmetic, this implies many operations on subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms that do not backtransform the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods when the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm as the problem size grows.
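The subnormal slowdown mentioned above is easy to reproduce in isolation. The small, hardware-dependent sketch below (not part of the paper's experiments) times element-wise scaling of a vector of normal versus subnormal double-precision numbers; on many x86 CPUs the subnormal case is markedly slower unless flush-to-zero/denormals-are-zero modes are enabled, which is exactly the IEEE 754 violation referred to in the abstract. The sizes, repetition count, and function name are illustrative assumptions.

```python
import time
import numpy as np

def time_scaling(x, reps=200):
    """Time repeated in-place scaling of the vector x (seconds)."""
    y = x.copy()
    t0 = time.perf_counter()
    for _ in range(reps):
        y *= 0.5   # result stays subnormal if the input was subnormal
        y *= 2.0   # scale back so magnitudes do not drift toward zero
    return time.perf_counter() - t0

n = 1_000_000
normal    = np.full(n, 1.0e-300)   # normal double-precision numbers
subnormal = np.full(n, 1.0e-310)   # below ~2.2e-308 -> subnormal (gradual underflow)

t_norm = time_scaling(normal)
t_sub  = time_scaling(subnormal)
print(f"normal:    {t_norm:.3f} s")
print(f"subnormal: {t_sub:.3f} s   (slowdown x{t_sub / t_norm:.1f})")
```

The observed slowdown factor varies strongly with the processor; disabling subnormals (e.g., by setting the FTZ/DAZ flags, as some compilers do with fast-math options) removes it, but then results that would gradually underflow are flushed to zero instead.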