We introduce Mstab, a Krylov subspace recycling method for the iterative solution of sequences of linear systems in which the system matrix is fixed, large, sparse, and nonsymmetric, and the right-hand-side vectors become available in sequence. Mstab utilizes the short-recurrence principle of induced dimension reduction (IDR)-type methods, adapted to solve sequences of linear systems. Using IDRstab to solve the linear system with the first right-hand side, the proposed method then recycles the Petrov space constructed during the solution of that system, generating a larger initial space for subsequent linear systems. The richer space potentially produces a rapidly convergent scheme. Numerical experiments demonstrate that Mstab often enters the superlinear convergence regime faster than other Krylov-type recycling methods.

1. Introduction. We consider iterative methods for the solution of sequences of large sparse nonsymmetric linear systems with a fixed nonsingular matrix A ∈ C^{N×N}, where the right-hand sides b^{(ι)} ∈ C^N are provided in sequence. Such situations occur, for example, when an implicit time-stepping scheme is applied to numerically solve a transient partial differential equation (PDE). Relevant applications include topology optimization [9], model reduction [6], structural dynamics [18], quantum chromodynamics [7], electrical circuit analysis [29], fluid dynamics [15], and optical tomography [13]. All of the aforementioned references use a technique called Krylov subspace recycling (KSSR).

It is useful to start our discussion by establishing the notation and a few basic principles for solving a single linear system, Ax = b. Suppose x_0 is an initial guess for the solution, and let r_0 = b − Ax_0 be the initial residual. We define, as usual, the Krylov subspace of degree k ∈ N associated with A and r_0 as

    K_k(A; r_0) := span{r_0, Ar_0, ..., A^{k−1} r_0}.

Standard Krylov subspace methods solve a single linear system by seeking, in the kth iteration, an approximate solution x_k ∈ x_0 + K_k(A; r_0) such that the residual r_k = b − Ax_k satisfies a suitable (Petrov–)Galerkin condition.
In this review we present hyper-dual numbers as a tool for the automatic differentiation of computer programs via operator overloading. We start with a motivational introduction to the ideas of algorithmic differentiation. Then we illuminate the concepts behind operator overloading and dual numbers. Afterwards, we present hyper-dual numbers (and vectors) as an extension of dual numbers for computing the Jacobian and Hessian matrices of a computer program. We review a mathematical theorem that proves the correctness of the derivative information obtained from hyper-dual numbers. Finally, we refer to a freely available implementation of a hyper-dual number class in Matlab. We explain an interface that can be called with a function as argument such that the Jacobian and Hessian of this function are returned.
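To make the operator-overloading idea concrete, here is a minimal dual-number sketch in Python (the review's implementation is in Matlab; this class and the test function are illustrative assumptions): a dual number a + b·ε with ε² = 0 propagates exact first derivatives through overloaded `+` and `*`.

```python
class Dual:
    """Dual number re + eps*E with E^2 = 0; the eps part carries the first derivative."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.re + other.re, self.eps + other.eps)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule emerges from (a + b*E)(c + d*E) = ac + (ad + bc)*E
        return Dual(self.re * other.re, self.re * other.eps + self.eps * other.re)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

d = f(Dual(4.0, 1.0))  # seed the eps part with 1.0 to obtain df/dx
print(d.re, d.eps)     # 57.0 26.0
```

Hyper-dual numbers extend this idea with two independent infinitesimal parts (and their product), so that second derivatives, and hence Hessians, are obtained with the same exactness.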
In a recent work (arXiv: 1804.08072v1) we introduced the Modified Augmented Lagrangian Method (MALM) for the efficient minimization of objective functions with large quadratic penalty terms. MALM yields an optimality equation system related to that of the original objective function, but it is numerically better behaved, as the large penalty factor is replaced by a milder one. In our original work, we formulated MALM with an inner iteration that applies a quasi-Newton method to compute the root of a multivariate function. In this note we show that this Newton-type formulation of the scheme can conveniently be replaced by the formulation of a well-scaled unconstrained minimization problem. We briefly review the Augmented Lagrangian Method (ALM) for minimizing equality-constrained problems. Then we motivate and derive the newly proposed formulation of MALM for minimizing unconstrained problems with large quadratic penalties. Eventually, we discuss relations between MALM and ALM.
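For reference, the classical ALM loop that the note reviews can be sketched on a toy equality-constrained quadratic program (the problem data and penalty parameter ρ below are illustrative assumptions, not from the note): minimize the augmented Lagrangian in x, then update the multiplier by λ ← λ + ρ·c(x).

```python
import numpy as np

# Toy problem: minimize 0.5 x'Qx - q'x  subject to  a'x = d
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
a = np.array([1.0, 1.0]); d = 1.0

lam, rho = 0.0, 10.0        # multiplier estimate and (moderate) penalty factor
for _ in range(20):
    # inner step: for a quadratic objective and linear constraint, the minimizer
    # of the augmented Lagrangian L(x) = f(x) + lam*c(x) + rho/2*c(x)^2 is explicit
    x = np.linalg.solve(Q + rho * np.outer(a, a), q - lam * a + rho * d * a)
    lam += rho * (a @ x - d)  # multiplier update drives c(x) -> 0

print(x, abs(a @ x - d) < 1e-8)  # converged constrained minimizer
```

The point of ALM, which MALM inherits, is that ρ can stay moderate: the multiplier update, not an ever-growing penalty factor, enforces the constraint, so the inner systems stay well scaled.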
We are concerned with the fastest possible direct numerical solution algorithm for a thin-banded or tridiagonal linear system of dimension N on a distributed computing network of N nodes connected in a binary communication tree. Our research is driven by the need for faster ways of numerically solving discretized systems of coupled one-dimensional blackbox boundary-value problems. Our paper presents two major results: First, we provide an algorithm that achieves the optimal parallel time complexity for solving tridiagonal and thin-banded linear systems. Second, we prove that it is impossible to improve the time complexity of this method by any polynomial degree. To solve a system of dimension m·N and bandwidth m ∈ Ω(N^{1/6}) on 2N − 1 computing nodes, our method needs time complexity O(log(N)^2 · m^3).
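As a sequential point of comparison (not the parallel method of the paper), the classical Thomas algorithm solves a tridiagonal system in O(N) time on a single node; parallel schemes like the one above must beat its serial critical path, not its operation count.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a sub-, b main-, c super-diagonal, d right-hand side."""
    n = len(b)
    cp = [0.0] * n; dp = [0.0] * n
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson-type stencil: 2 on the diagonal, -1 on the off-diagonals
x = thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 0, 0, 1])
print(x)  # [1.0, 1.0, 1.0, 1.0]
```

The forward sweep is an inherently serial recurrence, which is why distributed methods resort to tree-structured elimination to reach polylogarithmic time.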