2020
DOI: 10.1137/19m1257780

A Class of Fast and Accurate Summation Algorithms

Abstract: The need to sum floating-point numbers is ubiquitous in scientific computing. Standard recursive summation of n summands, often implemented in a blocked form, has a backward error bound proportional to nu, where u is the unit roundoff. With the growing interest in low precision floating-point arithmetic and ever increasing n in applications, computed sums are more likely to have insufficient accuracy. We propose a class of summation algorithms called FABsum (for "fast and accurate block summation") that applie…
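To make the blocked idea concrete, here is a minimal Python sketch in the FABsum spirit: each block of b summands is accumulated with plain recursive summation (the fast stage), and the block sums are then combined with compensated (Kahan) summation (the accurate stage). The block size b = 256 and the use of Kahan summation for the accurate stage are illustrative choices, not taken from the (truncated) abstract.

```python
import numpy as np

def kahan_sum(values):
    """Compensated (Kahan) summation: the 'accurate' stage."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

def fabsum(x, b=256):
    """Blocked summation in the FABsum spirit: recursive summation
    within each block of size b, compensated summation across blocks."""
    block_sums = []
    for i in range(0, len(x), b):
        s = 0.0
        for v in x[i:i + b]:  # 'fast' stage: plain recursive summation
            s += v
        block_sums.append(s)
    return kahan_sum(block_sums)

x = np.random.default_rng(0).random(10**6)
print(fabsum(x), float(np.sum(x)))
```

The intent of such a scheme is that the dominant error term scales with the block size b rather than with the total number of summands n, while the inner loop retains the speed of plain recursive summation.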

Cited by 33 publications (39 citation statements). References 20 publications.
“…3.1]) does not depend on the summands, the figure shows that the actual backward error strongly depends on the interval the data is sampled in. For the [0, 1] interval, the backward error is of order √n u, as predicted by the probabilistic bound, but for the [-1, 1] interval the error is much smaller, seemingly independent of n. Similar experiments showing strong variability in the error for different data distributions can be found in the literature [1], [3], [4], [5], [14], [20]. It is thus clear that the probabilistic bounds from [8] are not sharp for all data.…”
supporting
confidence: 74%
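A small Python experiment along the same lines can illustrate this data dependence (it is not a reproduction of the cited figure; the distributions, sizes, and precisions here are assumptions): single-precision recursive summation is compared against a double-precision reference, measuring the backward error |ŝ − s| / Σ|xᵢ|.

```python
import numpy as np
rng = np.random.default_rng(0)

def recursive_sum32(x):
    """Plain left-to-right recursive summation in binary32."""
    s = np.float32(0.0)
    for v in x:
        s = np.float32(s + v)
    return float(s)

for lo, hi in [(0.0, 1.0), (-1.0, 1.0)]:
    for n in (10**3, 10**4, 10**5):
        x = rng.uniform(lo, hi, n).astype(np.float32)
        xd = x.astype(np.float64)  # reference in double precision
        bwd_err = abs(recursive_sum32(x) - xd.sum()) / np.abs(xd).sum()
        print(f"[{lo}, {hi}]  n={n:6d}  backward error = {bwd_err:.2e}")
```

For [0, 1] data the measured error typically tracks √n u, whereas for [-1, 1] data rounding errors of both signs tend to cancel, leaving a much smaller error, as the excerpt describes.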
“…The experiments were run in MATLAB 9.7 (2019b) using the Stochastic Rounding Toolbox we developed, also available on GitHub. Reduced-precision floating-point formats were simulated on binary64 hardware using the MATLAB chop function [9].…”
Section: Numerical Experiments
mentioning
confidence: 99%
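For readers without access to MATLAB, the basic simulation idea can be sketched in Python: carry out each operation in binary64 and round the result to the target format after every operation. This is a rough analogue written for illustration, not the chop function itself, which supports several formats, rounding modes, and subnormal options [9].

```python
import numpy as np

def chop16(x):
    """Round a binary64 value to the nearest binary16 value, returned as
    binary64 -- a crude analogue of simulating fp16 on binary64 hardware."""
    return float(np.float16(x))

def fp16_recursive_sum(x):
    """Recursive summation with every intermediate rounded to binary16."""
    s = 0.0
    for v in x:
        s = chop16(s + chop16(v))  # round the operand and the result
    return s

print(fp16_recursive_sum([0.1] * 1000))  # exact value is 100.0
```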
“…This benefit should be understood in a statistical sense: stochastic rounding may produce an error larger than that of round-to-nearest on a single rounding operation, but over a large number of roundings it may help to obtain a more accurate result due to errors of different signs cancelling out. This rounding strategy is particularly effective at alleviating stagnation [4], a phenomenon that often occurs when computing the sum of a large number of terms that are small in magnitude. A sum stagnates when the summands become so small compared with the partial sum that their values are "swamped" [5], causing a dramatic increase in forward error.…”
mentioning
confidence: 99%
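The stagnation mechanism is easy to demonstrate. The sketch below (an illustration with a hypothetical helper sr16 targeting binary16, not the toolbox's code) accumulates many terms that eventually become smaller than half the spacing of the partial sum: round-to-nearest then stops making progress, while stochastic rounding, being unbiased, keeps the expected sum on track.

```python
import numpy as np
rng = np.random.default_rng(1)

def sr16(v):
    """Stochastically round a binary64 value to binary16: pick one of the
    two adjacent fp16 values with probability proportional to proximity."""
    lo = np.float16(v)                      # nearest fp16 value
    if float(lo) == v:
        return float(lo)                    # exactly representable
    if float(lo) > v:                       # nearest neighbour lies above v
        hi, lo = lo, np.nextafter(lo, np.float16(-np.inf))
    else:
        hi = np.nextafter(lo, np.float16(np.inf))
    p = (v - float(lo)) / (float(hi) - float(lo))  # probability of rounding up
    return float(hi) if rng.random() < p else float(lo)

term, n = 1e-3, 10**5                       # exact sum is 100.0
s_rn = s_sr = 0.0
for _ in range(n):
    s_rn = float(np.float16(s_rn + term))   # round-to-nearest: stagnates
    s_sr = sr16(s_sr + term)                # stochastic rounding: keeps growing
print(s_rn, s_sr, n * term)
```

Once the partial sum reaches 4, the term 1e-3 falls below half the fp16 spacing (about 0.002), so the round-to-nearest sum freezes near 4, while the stochastic sum stays close to 100 on average.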
“…Efficient performance of algebraic operations in the framework of floating-point arithmetic is a subject of considerable importance [1,2,3,4,5,6]. Approximations of elementary functions are crucial in scientific computing, computer graphics, signal processing, and other fields of engineering and science [7,8,9,10].…”
Section: Introduction
mentioning
confidence: 99%