In this volume, originally published in 1996, noisy information is studied in the context of computational complexity; in other words, the text deals with the computational complexity of mathematical problems for which information is partial, noisy, and priced. The author develops a general theory of the computational complexity of continuous problems with noisy information and gives a number of applications; both deterministic and stochastic noise are considered. He presents optimal algorithms, optimal information, and complexity bounds in several settings: worst case, average case, mixed worst-average and average-worst, and asymptotic. The book integrates the work of researchers in areas such as computational complexity, approximation theory, and statistics, and includes many new results. About two hundred exercises are supplied to deepen the reader's understanding of the subject. The text will be of interest to professional computer scientists, statisticians, applied mathematicians, engineers, control theorists, and economists.
We further develop the Multivariate Decomposition Method (MDM) for the Lebesgue integration of functions of infinitely many variables x_1, x_2, x_3, ... with respect to a corresponding product of a one-dimensional probability measure. The method is designed for functions that admit a dominantly convergent decomposition f = Σ_u f_u, where u ranges over all finite subsets of the positive integers, and for each u = {i_1, ..., i_k} the function f_u depends only on x_{i_1}, ..., x_{i_k}.

Although a number of concepts of infinite-dimensional integrals have been used in the literature, questions of uniqueness and compatibility have mostly not been studied. We show that, under appropriate convergence conditions, the Lebesgue integral equals the 'anchored' integral, independently of the anchor.

For approximating the integral, the MDM assumes that point values of f_u are available for important subsets u, at some known cost. In this paper we introduce a new setting, in which it is assumed that each f_u belongs to a normed space F_u, and that bounds B_u on ||f_u||_{F_u} are known. This contrasts with the assumption in many papers that weights γ_u, appearing in the norm of the infinite-dimensional function space, are somehow known. Often such weights γ_u were determined by minimizing an error bound depending on the B_u, the γ_u, and the chosen algorithm, resulting in weights that depend on the algorithm. In contrast, in this paper only the bounds B_u are assumed known. We give two examples in which we specialize the MDM: in the first case F_u is the |u|-fold tensor product of an anchored reproducing kernel Hilbert space, and in the second case it is a particular non-Hilbert space for integration over an unbounded domain.

arXiv:1501.05445v3 [math.NA]
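The anchored decomposition underlying the MDM can be illustrated in finite dimensions: for a fixed anchor, each term f_u is obtained by inclusion-exclusion over evaluations of f in which all variables outside the current subset are frozen at the anchor, and the terms over all subsets reconstruct f exactly. A minimal sketch under these standard definitions (the test function g, the anchor value, and the dimension are illustrative choices, not from the paper):

```python
from itertools import combinations, chain

def powerset(u):
    """All subsets of the tuple u, as tuples."""
    return chain.from_iterable(combinations(u, k) for k in range(len(u) + 1))

def anchored_term(f, u, x, anchor=0.0, dim=3):
    """Anchored decomposition term f_u evaluated at x, via inclusion-exclusion:
    f_u(x) = sum over v subseteq u of (-1)^(|u|-|v|) * f(x with coordinates
    outside v frozen at the anchor)."""
    total = 0.0
    for v in powerset(u):
        y = [x[i] if i in v else anchor for i in range(dim)]
        total += (-1) ** (len(u) - len(v)) * f(y)
    return total

# Illustrative function of 3 variables (stand-in for the infinite-variate f).
g = lambda x: (1 + x[0]) * (1 + x[1] * x[2])

x = [0.5, 0.2, 0.7]
# Summing f_u over all subsets u of the active coordinates recovers g(x).
recon = sum(anchored_term(g, u, x) for u in powerset((0, 1, 2)))
print(recon, g(x))
```

Each f_u depends only on the coordinates in u, which is what lets the MDM truncate the sum to the important subsets and spend its budget there.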
Abstract. Consider approximating functions based on a finite number of their samples. We show that adaptive algorithms are much more powerful than nonadaptive ones when dealing with piecewise smooth functions. More specifically, let F_r^1 be the class of scalar functions f: [0, T] → R whose derivatives of order up to r are continuous at every point except for one unknown singular point. We provide an adaptive algorithm A_n^ad that uses at most n samples of f and whose worst case L_p error (1 ≤ p < ∞) with respect to 'reasonable' subclasses of F_r^1 is proportional to n^{-r}. On the other hand, the worst case error of any nonadaptive algorithm that uses n samples is at best proportional to n^{-1/p}.

The restriction to only one singularity is necessary for the superiority of adaption in the worst case setting. Fortunately, adaption regains its power in the asymptotic setting even for a very general class F_r^∞ consisting of piecewise C^r-smooth functions, each having a finite number of singular points. For any f ∈ F_r^∞ our adaptive algorithm approximates f with error converging to zero at least as fast as n^{-r}. We also prove that the rate of convergence for nonadaptive methods cannot be better than n^{-1/p}, i.e., is much slower.

The results mentioned above do not hold if the errors are measured in the L_∞ norm, since no algorithm produces small L_∞ errors for functions with unknown discontinuities. However, we strongly believe that the L_∞ norm is inappropriate when dealing with singular functions and that the Skorohod metric should be used instead. We show that our adaptive algorithm retains its positive properties when the approximation error is measured in the Skorohod metric. That is, the worst case error with respect to F_r^1 equals Θ(n^{-r}), and the rate of convergence in the asymptotic setting for F_r^∞ is n^{-r}. Numerical results confirm the theoretical properties of our algorithms.
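The power of adaption here comes from being able to spend a few samples locating the unknown singular point before approximating. A toy sketch of that localization step, for a jump discontinuity found by bisection (this is a simplified illustration of the idea, not the algorithm A_n^ad of the paper, which handles derivative singularities and attains the n^{-r} rate):

```python
def locate_jump(f, a, b, tol=1e-10):
    """Bisection search for a jump discontinuity of f in [a, b]: at each
    step keep the half over which f varies the most, which is the half
    containing the jump once the jump dominates the smooth variation."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if abs(f(m) - f(a)) >= abs(f(b) - f(m)):
            b = m
        else:
            a = m
    return 0.5 * (a + b)

s = 0.3137                                   # unknown singular point (illustrative)
f = lambda x: x if x < s else x + 5.0        # jump of size 5 at s
print(locate_jump(f, 0.0, 1.0))              # close to s
```

Each bisection step costs one extra sample, so the singularity is located to accuracy ε with only O(log(1/ε)) evaluations; a nonadaptive method must commit its sample points in advance and cannot concentrate them near the unknown s.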
We study numerical integration I(f) = ∫_0^T f(x) dx for functions f with singularities. Nonadaptive methods are inefficient in this case, and we show that the problem can be efficiently solved by adaptive quadratures at cost similar to that for functions with no singularities.

Consider first a class F_r of functions whose derivatives of order up to r are continuous and uniformly bounded at every point but one singular point. We propose adaptive quadratures Q_n^*, each using at most n function values, whose worst case errors sup_{f ∈ F_r} |I(f) − Q_n^*(f)| are proportional to n^{-r}. On the other hand, the worst case error of nonadaptive methods does not converge faster than n^{-1}.

These worst case results do not extend to the case of functions with two or more singularities; however, adaption shows its power even for such functions in the asymptotic setting. That is, let F_r^∞ be the class of r-smooth functions with an arbitrary (but finite) number of singularities. Then a generalization of Q_n^* yields adaptive quadratures Q_n^{**} such that |I(f) − Q_n^{**}(f)| = O(n^{-r}) for any f ∈ F_r^∞. In addition, we show that for any sequence of nonadaptive methods there are 'many' functions in F_r^∞ for which the errors converge no faster than n^{-1}. Results of numerical experiments are also presented.

Mathematics Subject Classification (2000): 65D30 · 65D32
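The mechanism by which adaptive quadrature copes with a singularity can be seen in a textbook adaptive Simpson scheme: intervals are subdivided only where a local error indicator is large, so function evaluations pile up near the singular point while the smooth part is handled cheaply. This is a generic sketch of that mechanism, not the quadrature Q_n^* constructed in the paper:

```python
def adaptive_quad(f, a, b, tol=1e-8):
    """Adaptive Simpson quadrature: recursively subdivide wherever the
    local error indicator |S(a,m) + S(m,b) - S(a,b)| is large, so the
    evaluations concentrate near singularities of f."""
    def simpson(a, b, fa, fm, fb):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol, depth):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(a, m, fa, flm, fm)
        right = simpson(m, b, fm, frm, fb)
        if depth > 50 or abs(left + right - whole) < 15.0 * tol:
            # Accept, with the standard Richardson correction term.
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2.0, depth + 1)
                + recurse(m, b, fm, frm, fb, right, tol / 2.0, depth + 1))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(a, b, fa, fm, fb), tol, 0)

# Integrand with a derivative singularity (kink) at x = 0.3.
g = lambda x: abs(x - 0.3) ** 1.5
print(adaptive_quad(g, 0.0, 1.0))
```

A nonadaptive rule with the same number of points spreads them uniformly and pays the full n^{-1}-type penalty of the singularity, which is the gap the abstract quantifies.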