Computing rational minimax approximations can be very challenging when there are singularities on or near the interval of approximation, precisely the case where rational functions outperform polynomials by a landslide. We show that far more robust algorithms than previously available can be developed by making use of rational barycentric representations whose support points are chosen adaptively as the approximant is computed. Three variants of this barycentric strategy are all shown to be powerful: (1) a classical Remez algorithm, (2) an "AAA-Lawson" method of iteratively reweighted least-squares, and (3) a differential correction algorithm. Our preferred combination, implemented in the Chebfun MINIMAX code, is to use (2) in an initial phase and then switch to (1) for generically quadratic convergence. By such methods we can calculate approximations up to type (80, 80) of |x| on [−1, 1] in standard 16-digit floating point arithmetic, a problem for which Varga, Ruttan, and Carpenter required 200-digit extended precision.

Key words. barycentric formula, rational minimax approximation, Remez algorithm, differential correction algorithm, AAA algorithm, Lawson algorithm

AMS subject classifications. 41A20, 65D15

1. Introduction. The problem we are interested in is that of approximating functions f ∈ C([a, b]) using type (m, n) rational approximations with real coefficients, in the L∞ setting. The set of feasible approximations is

(1.1)    R_{m,n} = { p/q : p ∈ P_m, q ∈ P_n },

where P_k denotes the set of polynomials of degree at most k with real coefficients. Given f and prescribed nonnegative integers m, n, the goal is to compute

(1.2)    min_{r ∈ R_{m,n}} ‖f − r‖_∞,

where ‖·‖_∞ denotes the infinity norm over [a, b], i.e., ‖f − r‖_∞ = max_{x∈[a,b]} |f(x) − r(x)|. The minimizer of (1.2) is known to exist and to be unique [58, Ch. 24]. Let the minimax (or best) approximation be written r* = p/q ∈ R_{m,n}, where p and q have no common factors. The number d = min{m − deg p, n − deg q} is called the defect of r*.
It is known that there exists a so-called alternant (or reference) set consisting of ordered nodes a ≤ x_0 < x_1 < ··· < x_{m+n+1−d} ≤ b at which f − r* equioscillates, that is, takes the values ±‖f − r*‖_∞ with alternating signs.
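At the heart of these methods is the barycentric representation of a rational function in terms of support points t_j, values f_j, and weights w_j. A minimal sketch in Python (the function and variable names below are illustrative, not taken from the Chebfun MINIMAX code) shows the formula and its key robustness property, interpolation at the support points regardless of the weights:

```python
def bary_eval(x, support, values, weights):
    """Evaluate the barycentric rational
        r(x) = sum_j w_j f_j / (x - t_j)  /  sum_j w_j / (x - t_j).

    For any nonzero weights w_j, r interpolates f_j at each t_j; the
    adaptive choice of support points exploits this robustness."""
    # At a support point the quotient is formally 0/0; return the value directly.
    for t, f in zip(support, values):
        if x == t:
            return f
    num = sum(w * f / (x - t) for t, f, w in zip(support, values, weights))
    den = sum(w / (x - t) for t, w in zip(support, weights))
    return num / den

# Example: a type (2, 2) interpolant of |x| through the points -1, 0, 1.
r = lambda x: bary_eval(x, [-1.0, 0.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0])
```

Changing the weights changes the rational function between the support points while preserving interpolation, which is what the Remez, AAA-Lawson, and differential correction variants optimize over.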
Multiplication by a constant is a frequently used operation. To implement it on Field Programmable Gate Arrays (FPGAs), the state of the art offers two completely different methods: one relying on bit shifts and additions/subtractions, and another using look-up tables and additions. So far, it has been unclear which method performs best for a given constant and given input/output data types. The main contribution of this work is a thorough comparison of both methods in the main application contexts of constant multiplication: filters, signal-processing transforms, and elementary functions. Most of the previous state of the art addresses multiplication by an integer constant. This work shows that, in most of these application contexts, a formulation of the problem as multiplication by a real constant allows for more efficient architectures. Another contribution is a novel extension of the shift-and-add method to real constants. For that, an integer linear programming (ILP) formulation is proposed, which truncates each component in the shift-and-add network to a minimum necessary word size that is aligned with the approximation error of the coefficient. All methods are implemented within the open-source FloPoCo framework.¹

¹ With FloPoCo version 4.1.2, try the command flopoco FPExp we=11 wf=53 and look in the produced VHDL for the signal absKLog2.
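To illustrate the shift-and-add method for integer constants, here is a minimal Python sketch (the function names are our own, purely illustrative): each constant multiplier becomes a short network of shifts and additions/subtractions, the quantity the ILP formulation above then minimizes and truncates:

```python
def times_7(x):
    # 7 = 8 - 1: one shift and one subtraction replace a full multiplier.
    return (x << 3) - x

def times_81(x):
    # 81 = 9 * 9: two cascaded shift-and-add stages, each computing 9t = 8t + t.
    t = (x << 3) + x      # 9x
    return (t << 3) + t   # 81x
```

In hardware each line corresponds to one adder whose inputs are hard-wired shifted copies of earlier signals; the cost metric is the number and word size of these adders, not the constant itself.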
The usual way in which mathematicians work with randomness is through a rigorous formulation of the idea of Brownian motion, the limit of a random walk as the step length goes to zero. A Brownian path is continuous but nowhere differentiable, and this nonsmoothness is associated with technical complications that can be daunting. However, there is another approach to random processes that is more elementary, involving smooth random functions defined by finite Fourier series with random coefficients or, equivalently, by trigonometric polynomial interpolation through random data values. We show here how smooth random functions can provide a very practical way to explore random effects. For example, one can solve smooth random ordinary differential equations using standard mathematical definitions and numerical algorithms, rather than having to develop new definitions and algorithms for stochastic differential equations. In the limit as the number of Fourier coefficients defining a smooth random function goes to infinity, one obtains the usual stochastic objects in what is known as their Stratonovich interpretation.
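As a sketch of the construction (the normalization constant and the function name below are our own illustrative choices, not taken from any particular library), a smooth random function can be sampled as a finite Fourier series with independent Gaussian coefficients:

```python
import math
import random

def smooth_random_function(num_modes, seed=0, period=2.0):
    """Return a callable f(x): a random trigonometric series with
    num_modes Fourier modes and independent Gaussian coefficients.
    The 1/sqrt scaling keeps the variance bounded as num_modes grows
    (an illustrative normalization)."""
    rng = random.Random(seed)
    scale = 1.0 / math.sqrt(2 * num_modes + 1)
    a = [scale * rng.gauss(0.0, 1.0) for _ in range(num_modes + 1)]
    b = [scale * rng.gauss(0.0, 1.0) for _ in range(num_modes + 1)]

    def f(x):
        s = a[0]
        for j in range(1, num_modes + 1):
            t = 2.0 * math.pi * j * x / period
            s += a[j] * math.cos(t) + b[j] * math.sin(t)
        return s

    return f
```

Because the series is finite, f is an entire (infinitely smooth, periodic) function, so it can be fed directly to standard ODE solvers; the rougher Brownian-like behavior emerges only in the limit num_modes → ∞.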
With a long history dating back to the beginning of the 1970s, the Parks-McClellan algorithm is probably the best-known approach for designing finite impulse response filters. Despite being a standard routine in many signal processing packages, there are practical design specifications for which existing codes fail to work. Our goal is twofold. We first examine and present solutions for the practical difficulties related to weighted minimax polynomial approximation problems on multi-interval domains (i.e., the general setting in which the Parks-McClellan algorithm operates). Using these ideas, we then describe a robust implementation of the algorithm. It routinely outperforms existing minimax filter design routines.
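For reference, the Parks-McClellan algorithm is exposed in SciPy as scipy.signal.remez. A minimal lowpass design on a two-interval domain (the tap count and band edges below are arbitrary illustrative values) looks like this:

```python
import numpy as np
from scipy import signal

# 51-tap linear-phase lowpass filter: passband [0, 0.2], stopband [0.3, 0.5],
# with frequencies normalized so that the sampling rate is 1.0.
taps = signal.remez(51, [0.0, 0.2, 0.3, 0.5], [1.0, 0.0], fs=1.0)

# Inspect the equiripple frequency response.
w, h = signal.freqz(taps, worN=2048, fs=1.0)
passband_ripple = np.max(np.abs(np.abs(h[w <= 0.2]) - 1.0))
stopband_gain = np.max(np.abs(h[w >= 0.3]))
```

The multi-interval structure discussed above is visible in the bands argument: the approximation domain is the union of the passband and stopband, and the transition region (0.2, 0.3) is left unconstrained.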