Matrices coming from elliptic partial differential equations have been shown to have a low-rank property that can be efficiently exploited in multifrontal solvers to substantially reduce their complexity. Among the possible low-rank formats, the Block Low-Rank (BLR) format is easy to use in a general-purpose multifrontal solver, and its potential compared with standard (full-rank) solvers has been demonstrated. Recently, new variants have been introduced that were proved to further reduce the complexity, but their performance had not previously been analyzed. In this paper, we present a multithreaded BLR factorization and analyze its efficiency and scalability in shared-memory multicore environments. We identify the challenges posed by the use of BLR approximations in multifrontal solvers and put forward several algorithmic variants of the BLR factorization that overcome these challenges by improving its efficiency and scalability. We illustrate the performance analysis of the BLR multifrontal factorization with numerical experiments on a large set of problems coming from a variety of real-life applications.
Additional Key Words and Phrases: sparse linear algebra, multifrontal factorization, Block Low-Rank, multicore architectures
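To make the core idea of the BLR format concrete, here is a minimal sketch, assuming NumPy, of compressing a single off-diagonal block with a truncated SVD. The function name `compress_block` and the threshold `tol` are illustrative, not taken from the paper or from any particular solver:

```python
# Minimal sketch of Block Low-Rank (BLR) compression of one
# off-diagonal block via truncated SVD (illustrative only).
import numpy as np

def compress_block(B, tol=1e-8):
    """Approximate B ~= X @ Y, with the rank chosen by a relative
    singular-value threshold; return B unchanged (with Y = None)
    if the low-rank form would not save memory."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    m, n = B.shape
    if rank * (m + n) >= m * n:      # compression not beneficial
        return B, None
    X = U[:, :rank] * s[:rank]       # m x rank
    Y = Vt[:rank, :]                 # rank x n
    return X, Y

# A kernel block between two well-separated clusters, as arises
# from elliptic PDEs, is numerically low rank.
x = np.linspace(0.0, 1.0, 128)
B = 1.0 / (1.0 + np.abs(x[:, None] - (x[None, :] + 2.0)))
X, Y = compress_block(B)
print("rank:", X.shape[1],
      "rel. error:", np.linalg.norm(B - X @ Y) / np.linalg.norm(B))
```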
Traditional rounding error analysis in numerical linear algebra leads to backward error bounds involving the constant $\gamma_n = nu/(1-nu)$, for a problem size $n$ and unit roundoff $u$. In light of large-scale and possibly low-precision computations, such bounds can struggle to provide any useful information. We develop a new probabilistic rounding error analysis that can be applied to a wide range of algorithms. By using a concentration inequality and making probabilistic assumptions about the rounding errors, we show that in several core linear algebra computations $\gamma_n$ can be replaced by a relaxed constant $\widetilde{\gamma}_n$ proportional to $\sqrt{n\log n}\,u$ with a probability bounded below by a quantity independent of $n$. The new constant $\widetilde{\gamma}_n$ grows much more slowly with $n$ than $\gamma_n$. Our results have three key features: they are backward error bounds; they are exact, not first order; and they are valid for any $n$, unlike results obtained by applying the central limit theorem, which apply only as $n \to \infty$. We provide numerical experiments that show that for both random and real-life matrices the bounds can be much smaller than the standard deterministic bounds and can have the correct asymptotic growth with $n$. We also identify two special situations in which the assumptions underlying the analysis are not valid and the bounds do not hold. Our analysis provides, for the first time, a rigorous foundation for the rule of thumb that "one can take the square root of an error constant because of statistical effects in rounding error propagation."
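As an illustration of the gap between the two constants, the following sketch (assuming NumPy; the setup is illustrative, not the paper's experiment) measures the backward error of single-precision recursive summation and prints it alongside $nu$ and $\sqrt{n\log n}\,u$:

```python
# Compare the measured backward error of recursive summation in
# float32 with the deterministic bound n*u and the probabilistic
# bound sqrt(n*log n)*u (illustrative experiment).
import numpy as np

rng = np.random.default_rng(0)
u = 2.0**-24                        # unit roundoff of IEEE single precision

for n in (10**3, 10**4, 10**5, 10**6):
    x = rng.uniform(0.0, 1.0, n)
    # np.cumsum accumulates sequentially, so its last entry is the
    # result of recursive summation carried out in float32.
    s_hat = float(np.cumsum(x.astype(np.float32), dtype=np.float32)[-1])
    s_ref = float(np.sum(x))        # float64 reference value
    # Minimal backward error for summation: |s_hat - s| / sum(|x_i|)
    bwd = abs(s_hat - s_ref) / float(np.sum(np.abs(x)))
    print(f"n={n:8d}  backward error={bwd:9.2e}  "
          f"n*u={n*u:9.2e}  sqrt(n*log n)*u={np.sqrt(n*np.log(n))*u:9.2e}")
```

For large $n$ the measured backward error typically sits far below $nu$ and close to the $\sqrt{n\log n}\,u$ growth rate, which is the behavior the probabilistic analysis predicts.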
Stochastic rounding rounds a real number to the next larger or smaller floating-point number with probabilities equal to 1 minus the relative distances to those numbers. It is gaining attention in deep learning because it can increase the success of low-precision computations. We compare basic properties of stochastic rounding with those of round to nearest, finding properties in common as well as significant differences. We prove that for stochastic rounding the rounding errors are mean independent random variables with zero mean. We derive a new version of our probabilistic error analysis theorem from [N. J. Higham and T. Mary, SIAM J. Sci. Comput., 41 (2019), pp. A2815--A2835], weakening the assumption of independence of the random variables to mean independence. These results imply that for a wide range of linear algebra computations the backward error for stochastic rounding is unconditionally bounded by a multiple of $\sqrt{n}\,u$ to first order, with a certain probability, where $n$ is the problem size and $u$ is the unit roundoff. This is the first scenario in which the rule of thumb that one can replace $nu$ by $\sqrt{n}\,u$ in a rounding error bound has been shown to hold without any additional assumptions on the rounding errors. We also explain how stochastic rounding avoids the phenomenon of stagnation in sums, whereby small addends are obliterated by round to nearest when they are too small relative to the sum.
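To make the stagnation point concrete, below is a minimal sketch, assuming NumPy, of stochastic rounding from double to single precision; the function `sr_to_float32` and the chosen increment are illustrative, not the paper's code:

```python
# Stochastic rounding of a float64 scalar to float32, plus a demo
# of how it avoids stagnation in sums (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

def sr_to_float32(x):
    """Round the float64 scalar x to float32 stochastically: each of
    the two neighbouring float32 values is chosen with probability
    1 minus its relative distance to x."""
    lo = np.float32(x)              # round-to-nearest neighbour
    if float(lo) == x:
        return lo                   # exactly representable: no error
    # The other neighbour lies on the opposite side of x.
    direction = np.float32(np.inf) if float(lo) < x else np.float32(-np.inf)
    hi = np.nextafter(lo, direction)
    if float(hi) < float(lo):
        lo, hi = hi, lo             # ensure lo < x < hi
    p_up = (x - float(lo)) / (float(hi) - float(lo))
    return hi if rng.random() < p_up else lo

# Stagnation demo: repeatedly add an increment smaller than half an
# ulp of the running sum. Round to nearest never moves; stochastic
# rounding tracks the true sum in expectation.
delta, n = 5e-8, 10**5              # ulp(1.0) in float32 is ~1.19e-7
s_rn = s_sr = np.float32(1.0)
for _ in range(n):
    s_rn = np.float32(float(s_rn) + delta)    # stagnates at 1.0
    s_sr = sr_to_float32(float(s_sr) + delta)
print(f"true={1.0 + n*delta:.6f}  RN={float(s_rn):.6f}  SR={float(s_sr):.6f}")
```

Each stochastically rounded addition moves the sum up by one ulp with probability proportional to the addend, so the expected value of the computed sum matches the true sum even though every individual addend is below the round-to-nearest threshold.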