A new rational Krylov method for the efficient solution of nonlinear eigenvalue problems, A(λ)x = 0, is proposed. This iterative method, called the fully rational Krylov method for nonlinear eigenvalue problems (abbreviated as NLEIGS), is based on linear rational interpolation and generalizes the Newton rational Krylov method proposed in [R. Van Beeumen, K. Meerbergen, and W. Michiels, SIAM J. Sci. Comput., 35 (2013), pp. A327-A350]. NLEIGS utilizes a dynamically constructed rational interpolant of the nonlinear function A(λ) and a new companion-type linearization that yields a generalized eigenvalue problem with special structure, which is particularly well suited for the rational Krylov method. A new approach for computing rational divided differences using matrix functions is presented. It is shown that NLEIGS has a computational cost comparable to the Newton rational Krylov method but converges more reliably, in particular when the nonlinear function A(λ) has singularities near the target set. Moreover, NLEIGS implements an automatic scaling procedure that makes it work robustly regardless of the location and shape of the target set, and it also features low-rank approximation techniques for increased computational efficiency. Small- and large-scale numerical examples are included. Based on the numerical experiments, we can recommend two variants of the algorithm for solving the nonlinear eigenvalue problem.
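The core pipeline described here (interpolate A(λ), linearize into a companion-type pencil, solve a generalized eigenvalue problem) can be made concrete in a few lines. The sketch below is a simplification under stated assumptions: it uses plain Newton-basis polynomial interpolation at Chebyshev nodes rather than NLEIGS's rational Leja-Bagby interpolants, and an illustrative delay problem A(λ) = -λI + A0 + A1 e^{-λ}; it is not the paper's algorithm.

```python
# Minimal sketch: interpolate A(lam), linearize, solve a GEP. Plain Newton-basis
# polynomial interpolation at Chebyshev nodes is used here, NOT the rational
# Leja-Bagby interpolants of NLEIGS; the delay problem is an assumed example.
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(0)
n, N = 5, 12                                 # problem size, interpolation degree
A0 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n))
A = lambda lam: -lam * np.eye(n) + A0 + A1 * np.exp(-lam)

# Interpolation nodes on the target interval [-2, 2].
sigma = 2.0 * np.cos(np.pi * (2 * np.arange(N + 1) + 1) / (2 * (N + 1)))

# Matrix divided differences D[j] = A[sigma_0, ..., sigma_j], computed in place.
D = [A(s) for s in sigma]
for k in range(1, N + 1):
    for j in range(N, k - 1, -1):
        D[j] = (D[j] - D[j - 1]) / (sigma[j] - sigma[j - k])

# Companion-type pencil AA*y = lam*BB*y for the interpolant
# P(lam) = sum_j D[j] prod_{i<j}(lam - sigma_i), built from the recurrences
# lam*y_j = sigma_j*y_j + y_{j+1} with blocks y_j = prod_{i<j}(lam - sigma_i)*x.
AA = np.zeros((N * n, N * n)); BB = np.zeros((N * n, N * n))
AA[:n, :(N - 1) * n] = np.hstack(D[:N - 1])
AA[:n, (N - 1) * n:] = D[N - 1] - sigma[N - 1] * D[N]
BB[:n, (N - 1) * n:] = -D[N]
for j in range(N - 1):
    AA[(j + 1) * n:(j + 2) * n, j * n:(j + 1) * n] = sigma[j] * np.eye(n)
    AA[(j + 1) * n:(j + 2) * n, (j + 1) * n:(j + 2) * n] = np.eye(n)
    BB[(j + 1) * n:(j + 2) * n, j * n:(j + 1) * n] = np.eye(n)

# Solve the linearized GEP and verify the *nonlinear* residual; only eigenvalues
# inside the interpolation region are trustworthy, and the attainable residual
# is limited by the interpolation error of the degree-N polynomial.
vals, vecs = sla.eig(AA, BB)
for lam, y in zip(vals, vecs.T):
    if np.isfinite(lam) and abs(lam) < 2.0:
        x = y[:n] / np.linalg.norm(y[:n])
        res = np.linalg.norm(A(lam) @ x)
        if res < 1e-6:
            print(f"lambda = {lam:.6f}   ||A(lambda)x|| = {res:.1e}")
```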
We propose a new uniform framework of Compact Rational Krylov (CORK) methods for solving large-scale nonlinear eigenvalue problems, A(λ)x = 0. For many years, linearizations have been used for solving polynomial and rational eigenvalue problems; for the general nonlinear case, A(λ) can first be approximated by a (rational) matrix polynomial before a convenient linearization is applied. The major disadvantage of linearization-based methods, however, is that memory and orthogonalization costs grow with the iteration count and are, in general, proportional to the degree of the polynomial. The CORK family of rational Krylov methods therefore exploits the structure of the linearization pencils by using a generalization of the compact Arnoldi decomposition. In this way, the extra memory and orthogonalization costs due to the linearization of the original eigenvalue problem become negligible for large-scale problems. Furthermore, we prove that each CORK step breaks down into an orthogonalization step of the original problem dimension and a rational Krylov step on small matrices. We also briefly discuss implicit restarting of the CORK method and how to exploit low-rank structure. The CORK method is illustrated with two large-scale examples.
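The memory claim can be checked with a small experiment: run shift-and-invert Arnoldi on a companion linearization and observe that the stacked block components of all basis vectors span a subspace of the original dimension n whose size grows roughly like (iterations + degree), not their product. The monomial companion pencil, sizes, and shift below are illustrative assumptions, not the paper's structured pencils.

```python
# Numerical illustration of the compact (CORK-style) representation
# V = (I_d kron Q) * U, with Q of the original problem dimension n.
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 200, 6, 15                    # size, degree, Arnoldi iterations
C = [rng.standard_normal((n, n)) for _ in range(d + 1)]

# Companion pencil AA*y = lam*BB*y of P(lam) = sum_j C[j] lam^j.
AA = np.zeros((d * n, d * n)); BB = np.eye(d * n)
for j in range(d - 1):
    AA[j * n:(j + 1) * n, (j + 1) * n:(j + 2) * n] = np.eye(n)
AA[(d - 1) * n:, :] = -np.hstack(C[:d])
BB[(d - 1) * n:, (d - 1) * n:] = C[d]

# Shift-and-invert Arnoldi on the full dn-long vectors (what CORK avoids).
sigma = 0.3
M = np.linalg.inv(AA - sigma * BB)      # dense inverse; fine for a demo
V = rng.standard_normal((d * n, 1)); V /= np.linalg.norm(V)
for _ in range(k):
    u = M @ (BB @ V[:, -1])
    u -= V @ (V.T @ u); u -= V @ (V.T @ u)          # Gram-Schmidt, twice
    V = np.hstack([V, (u / np.linalg.norm(u))[:, None]])

# Stack every block component of every basis vector: an n x d*(k+1) matrix.
W = np.hstack([V[j * n:(j + 1) * n, :] for j in range(d)])
s = np.linalg.svd(W, compute_uv=False)
r = int((s > 1e-12 * s[0]).sum())
print(f"rank of stacked blocks: r = {r}  vs  d*(k+1) = {d * (k + 1)}")

# Compact storage: Q is n x r, U is (d*r) x (k+1); reconstruction is exact.
Q = np.linalg.svd(W, full_matrices=False)[0][:, :r]
U = np.vstack([Q.T @ V[j * n:(j + 1) * n, :] for j in range(d)])
Vrec = np.vstack([Q @ U[j * r:(j + 1) * r, :] for j in range(d)])
print("reconstruction error:", np.linalg.norm(Vrec - V))
print(f"floats stored: full {d * n * (k + 1)}  vs  compact {n * r + d * r * (k + 1)}")
```

For the monomial companion pencil one can verify directly why the rank grows by at most one per step: the pencil rows give w_{j+1} = σ w_j + v_j for each new vector w, so only its first block contributes a new direction.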
This paper proposes a new rational Krylov method for solving the nonlinear eigenvalue problem A(λ)x = 0. The method approximates A(λ) by Hermite interpolation, where neither the degree of the interpolating polynomial nor the interpolation points are fixed in advance. It uses a companion-type reformulation to obtain a linear generalized eigenvalue problem (GEP), to which a structure-preserving rational Krylov method is applied. The companion form grows in each iteration and the interpolation points are chosen dynamically. Each iteration requires a linear system solve with A(σ), where σ is the last interpolation point. The method is illustrated by small- and large-scale numerical examples. In particular, we illustrate that the method is fully dynamic and can be used both as a global search method and as a local refinement method. In the latter case, we compare the method to Newton's method and illustrate that an even faster convergence rate can be achieved.
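For the local-refinement comparison, the classical baseline is Newton's method applied to the bordered system F(x, λ) = [A(λ)x; cᵀx − 1] = 0, which converges quadratically near a simple eigenpair. A minimal sketch follows; the delay-type A(λ), the normalization vector c, and the starting guess are assumptions for illustration.

```python
# Newton's method on the bordered system [A(lam)x; c^T x - 1] = 0, the
# classical local-refinement baseline the rational Krylov method is compared to.
import numpy as np

rng = np.random.default_rng(2)
n = 8
A0 = rng.standard_normal((n, n)); A1 = rng.standard_normal((n, n))
A  = lambda lam: -lam * np.eye(n) + A0 + A1 * np.exp(-lam)
dA = lambda lam: -np.eye(n) - A1 * np.exp(-lam)     # derivative A'(lam)

c = np.ones(n) / np.sqrt(n)                         # normalization vector
lam = 0.5 + 0.1j                                    # rough initial guess
x = np.linalg.solve(A(lam), c); x = x / (c @ x)     # inverse-iteration start

for it in range(30):
    F = np.concatenate([A(lam) @ x, [c @ x - 1.0]])
    if np.linalg.norm(F) < 1e-12:
        break
    # Jacobian of F with respect to (x, lam).
    J = np.zeros((n + 1, n + 1), dtype=complex)
    J[:n, :n] = A(lam); J[:n, n] = dA(lam) @ x; J[n, :n] = c
    step = np.linalg.solve(J, -F)
    x, lam = x + step[:n], lam + step[n]

# Quadratic convergence holds near a simple eigenpair; from a poor guess
# Newton may stagnate or jump to a different eigenvalue.
print(f"lambda = {lam:.12f}  residual = {np.linalg.norm(A(lam) @ x):.1e}  iters = {it}")
```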
In [Van Beeumen et al., HPC Asia 2020, https://www.doi.org/10.1145/3368474.3368497] a scalable and matrix-free eigensolver was proposed for studying the many-body localization (MBL) transition of two-level quantum spin chain models with nearest-neighbor XX + YY interactions plus Z terms. This type of problem is computationally challenging because the vector space dimension grows exponentially with the physical system size, and averaging over different configurations of the random disorder is needed to obtain relevant statistical behavior. For each eigenvalue problem, eigenvalues from different regions of the spectrum and their corresponding eigenvectors need to be computed. Traditionally, the interior eigenstates of a single eigenvalue problem are computed via the shift-and-invert Lanczos algorithm. Due to the extremely high memory footprint of the LU factorizations, this technique is not well suited for large numbers of spins L; for example, going beyond L = 24 requires thousands of compute nodes on modern high-performance computing infrastructures. The matrix-free approach does not suffer from this memory bottleneck; however, its scalability is limited by an imbalance between computation and communication. We present a few strategies to reduce this imbalance and to significantly enhance the scalability of the matrix-free eigensolver. To optimize communication performance, we leverage the consistent space runtime, CSPACER, and show its efficiency in accelerating the MBL irregular communication patterns at scale compared to optimized MPI non-blocking two-sided and one-sided RMA implementation variants. The efficiency and effectiveness of the proposed algorithm are demonstrated by computing eigenstates on a massively parallel many-core high-performance computer.
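The heart of the matrix-free approach is applying the Hamiltonian to a vector directly from the bit-string structure of the spin basis, with no stored matrix or LU factors. Below is a serial toy version for an XX + YY chain with random Z fields; the couplings, disorder range, and the Lanczos call for extremal states are illustrative assumptions, whereas the solver discussed above targets interior eigenstates at far larger L on distributed machines.

```python
# Serial toy sketch of the matrix-free H*v kernel for a spin-1/2 chain with
# nearest-neighbor XX + YY coupling plus random on-site Z fields (MBL-type).
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

L = 10                                  # spins; Hilbert-space dimension 2^L
dim = 1 << L
rng = np.random.default_rng(3)
h = rng.uniform(-5.0, 5.0, size=L)      # random Z disorder (assumed range)

def matvec(v):
    """Apply H = sum_i (SxSx + SySy)_{i,i+1} + sum_i h_i Sz_i, bit by bit."""
    w = np.zeros_like(v)
    for s in range(dim):                # basis states are L-bit integers
        a = v[s]
        if a == 0.0:
            continue
        # Diagonal Z part: Sz eigenvalue +1/2 for bit 1, -1/2 for bit 0.
        zterm = sum(h[i] * (0.5 if (s >> i) & 1 else -0.5) for i in range(L))
        w[s] += zterm * a
        # XX + YY flips antiparallel neighbor pairs with matrix element 1/2.
        for i in range(L - 1):
            if ((s >> i) & 1) != ((s >> (i + 1)) & 1):
                t = s ^ ((1 << i) | (1 << (i + 1)))
                w[t] += 0.5 * a
    return w

H = LinearOperator((dim, dim), matvec=matvec, dtype=np.float64)
# Extremal states via Lanczos are easy; interior states need shift-and-invert
# or spectral filtering, which is where the memory trade-off above arises.
vals = eigsh(H, k=4, which="SA", return_eigenvectors=False)
print("lowest eigenvalues:", np.sort(vals))
```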
This paper considers interpolating matrix polynomials P(λ) in Lagrange and Hermite bases. A classical approach to investigating the polynomial eigenvalue problem P(λ)x = 0 is linearization, by which the polynomial is converted into a larger matrix pencil with the same eigenvalues. Since the current linearizations of degree-n Lagrange polynomials consist of matrix pencils with n + 2 blocks, they introduce additional eigenvalues at infinity. We therefore introduce new linearizations that overcome this drawback. Initially, we restrict ourselves to Lagrange and barycentric Lagrange matrix polynomials and give two new, more compact linearizations, resulting in matrix pencils of n + 1 and n blocks for polynomials of degree n. For the latter, there is a one-to-one correspondence between the eigenpairs of P(λ) and the eigenpairs of the pencil. We also prove that these linearizations are strong. Moreover, we show how to exploit the structure of the proposed matrix pencils in Krylov-type methods, so that only linear system solves with matrices of the original matrix polynomial dimension are needed. Finally, we generalize to multiple interpolation and introduce new linearizations for Hermite Lagrange and barycentric Hermite matrix polynomials. Again, we show that these linearizations are strong and that there is a one-to-one correspondence of the eigenpairs.
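The flavor of such constructions can be seen in a small experiment: assemble a barycentric-Lagrange pencil for an interpolated matrix polynomial and check that its finite eigenvalues coincide with the polynomial's. The pencil below is a generic barycentric linearization of d + 1 blocks for degree d, derived here for illustration; it is not one of the specific compact pencils introduced in the paper, and the nodes, sizes, and cross-check are assumptions.

```python
# Generic barycentric-Lagrange linearization sketch (NOT the paper's pencils).
# Unknowns z_i = w_i * l(lam)/(lam - x_i) * v; row 0 enforces
# sum_i P(x_i) z_i = 0 and rows 1..d enforce that (lam - x_i) z_i / w_i
# is the same vector for every i.
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(4)
d, m = 3, 4                                   # degree, matrix size
C = [rng.standard_normal((m, m)) for _ in range(d + 1)]
P = lambda lam: sum(Cj * lam**j for j, Cj in enumerate(C))

x = np.cos(np.pi * np.arange(d + 1) / d)      # d+1 Chebyshev-Lobatto nodes
w = np.array([1.0 / np.prod([x[i] - x[j] for j in range(d + 1) if j != i])
              for i in range(d + 1)])         # barycentric weights
Pv = [P(xi) for xi in x]                      # samples; exact for degree d

Nn = (d + 1) * m
AA = np.zeros((Nn, Nn)); BB = np.zeros((Nn, Nn))
AA[:m, :] = np.hstack(Pv)
for i in range(d):
    r = slice((i + 1) * m, (i + 2) * m)
    AA[r, i * m:(i + 1) * m]       = (x[i] / w[i]) * np.eye(m)
    AA[r, (i + 1) * m:(i + 2) * m] = -(x[i + 1] / w[i + 1]) * np.eye(m)
    BB[r, i * m:(i + 1) * m]       = (1.0 / w[i]) * np.eye(m)
    BB[r, (i + 1) * m:(i + 2) * m] = -(1.0 / w[i + 1]) * np.eye(m)

# BB is rank-deficient by m, so this pencil carries m infinite eigenvalues
# (the kind of spurious infinities the paper's compact pencils avoid); the
# remaining d*m finite ones should be exactly the eigenvalues of P.
alpha, beta = sla.eig(AA, BB, right=False, homogeneous_eigvals=True)
keep = np.abs(beta) > 1e-8 * (np.abs(alpha) + np.abs(beta))
finite = np.sort_complex(alpha[keep] / beta[keep])

# Cross-check against a monomial companion linearization of P.
cA = np.zeros((d * m, d * m)); cB = np.eye(d * m)
for j in range(d - 1):
    cA[j * m:(j + 1) * m, (j + 1) * m:(j + 2) * m] = np.eye(m)
cA[(d - 1) * m:, :] = -np.hstack(C[:d])
cB[(d - 1) * m:, (d - 1) * m:] = C[d]
ref = np.sort_complex(sla.eig(cA, cB, right=False))
print("finite eigenvalues:", finite.size, "of", Nn)
print("match companion eigenvalues:",
      finite.size == ref.size and np.allclose(finite, ref))
```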