Optimal error bounds for adaptive and nonadaptive numerical methods are compared. Since the class of adaptive methods is much larger, a well-chosen adaptive method might seem to be better than any nonadaptive method. Nevertheless, there are several results saying that under natural assumptions adaptive methods are not better than nonadaptive ones. There are also other results, however, saying that adaptive methods can be significantly better than nonadaptive ones, as well as bounds on how much better they can be. It turns out that the answer to the "adaption problem" depends very much on what is known a priori about the problem in question; even a seemingly small change of the assumptions can lead to a different answer. © 1996 Academic Press, Inc.

THE ADAPTION PROBLEM

One of the more controversial issues in numerical analysis concerns adaptive algorithms. The use of such algorithms is widespread, and many people believe that well-chosen adaptive algorithms are much better than nonadaptive methods in most situations. Such a belief is usually based on numerical experimentation. In this paper we survey what is known theoretically regarding the power of adaption. We will present some results which state that under natural assumptions adaptive methods are not better than nonadaptive ones. There are also other results, however, saying that adaptive methods can be significantly superior to nonadaptive ones. As we will see, the power of adaption is critically dependent on our a priori knowledge concerning the problem being studied; even a seemingly small change in the assumptions can lead to a different answer.

ERICH NOVAK

Let us begin with some well-known examples. The bisection method and the Newton method for zero finding of a function are adaptive, since they compute a sequence (x_n)_n of knots that depends on the function. The Gauss formula for numerical integration is nonadaptive, since its knots and weights do not depend on the function. A nonadaptive method provides an immediate decomposition for parallel computation. If adaptive information is superior to nonadaptive information, then an analysis of the tradeoff between using adaptive or nonadaptive information on a parallel computer should be carried out.

To formulate the adaption problem precisely, we need some definitions and notation. Many problems of numerical analysis can be described as computing an approximation of the value S(f) of an operator S: X → G. Here we assume that X is a normed space of functions and G is also a normed space. The operator S describes the solution of a mathematical problem, for example the solution of a boundary value problem or an integral equation. Also, numerical integration (with G = R) and the recovery of functions (with an embedding S = id: X → L_p, where X ⊂ L_p) can be stated in this way. In many cases the space X is infinite dimensional, and therefore f ∈ X cannot directly be an input of a computation. We usually replace S with a discretization method given, for example, by a finite element method. Accordingly, numerical methods a...
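The adaptive/nonadaptive distinction in this abstract can be made concrete with a minimal sketch (illustrative code, not taken from the paper): in bisection, each new knot depends on previously computed function values, while a composite trapezoid rule fixes all of its knots before f is ever evaluated.

```python
import math

def bisection(f, a, b, n=50):
    """Adaptive: the next knot is the midpoint of the current bracket,
    so the sequence of evaluation points depends on f itself."""
    fa = f(a)
    for _ in range(n):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:    # sign change: root stays in [a, m]
            b = m
        else:                 # otherwise: root stays in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

def trapezoid(f, a, b, n=1000):
    """Nonadaptive: the n+1 equispaced knots are chosen before f is seen,
    so all evaluations can run in parallel immediately."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# The knots used by bisection differ from function to function; the knots
# of the trapezoid rule never do.
root = bisection(lambda x: x * x - 2.0, 0.0, 2.0)   # approx. sqrt(2)
integral = trapezoid(math.sin, 0.0, math.pi)        # approx. 2
```

This also illustrates the parallelization remark above: the trapezoid sums over a grid known in advance, whereas each bisection step must wait for the previous function value.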
Previous work on the ε-complexity of elliptic boundary-value problems Lu = f assumed that the class F of problem elements f was the unit ball of a Sobolev space. In a recent paper, we considered the case of a model two-point boundary-value problem, with F being a class of analytic functions. In this paper, we ask what happens if F is a class of piecewise analytic functions. We find that the complexity depends strongly on how much a priori information we have about the breakpoints. If the location of the breakpoints is known, then the ε-complexity is proportional to ln(ε⁻¹), and there is a finite element p-method (in the sense of Babuška) whose cost is optimal to within a constant factor. If we know neither the location nor the number of breakpoints, then the problem is unsolvable for ε < √2. If we know only that there are b ≥ 2 breakpoints, but we don't know their location, then the ε-complexity is proportional to bε⁻¹, and a finite element h-method is nearly optimal. In short, knowing the location of the breakpoints is as good as knowing that the problem elements are analytic, whereas only knowing the number of breakpoints is no better than knowing that the problem elements have a bounded derivative in the L₂ sense.

INTRODUCTION

Most work on the ε-complexity of elliptic boundary-value problems Lu = f has assumed that the class F of problem elements f consisted of functions whose smoothness was fixed and known; see, e.g., [6]. In particular, if F is the unit ball of a Sobolev space, then comp(ε) is a power of ε⁻¹; moreover, we found conditions that are necessary and sufficient for a finite element h-method¹ to be (almost) optimal. Unfortunately, assuming that F is the unit ball of a Sobolev space of fixed smoothness means that we must know the smoothness in advance. In practice, this may often be difficult. One possible way around this problem is to note that problem elements are often either analytic or piecewise analytic. If we restrict ourselves to such f, then we don't have to worry so much about quantifying the exact smoothness of f. Moreover, any lack of smoothness can be confined to a small set of points.

In an earlier paper [7], we looked at the case of analytic F for a simple model two-point boundary-value problem. These results were encouraging. Rather than depending on a power of ε⁻¹, we found that the ε-complexity was proportional to ln(ε⁻¹) or to ln²(ε⁻¹), depending on whether or not there was "breathing room" between the domain on which the problem was defined and the interior of the domain of analyticity of ...

This research was supported in part by the National Science Foundation under Grant CCR-91-01149.

¹ Here we use the widely-used classification of finite element methods that was introduced by Babuška and his colleagues: (1) h-methods, in which the degree of the finite element method is held fixed and the partition varies (these are the usual finite element methods); (2) p-methods, in which the partition is fixed and the degree is allowed to vary; (3) (h, p)-me...
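The cost separation this abstract describes can be sketched numerically (the function names are hypothetical and the proportionality constants are set to 1 for illustration; this is not the paper's algorithm): knowing the breakpoint locations yields a polylogarithmic cost ln(ε⁻¹), while knowing only the number b of breakpoints forces a cost proportional to bε⁻¹.

```python
import math

def cost_known_locations(eps):
    # Breakpoint locations known: eps-complexity ~ ln(1/eps), achieved up
    # to a constant factor by a finite element p-method.
    return math.log(1.0 / eps)

def cost_known_count(eps, b):
    # Only the number b >= 2 of breakpoints known: eps-complexity ~ b/eps,
    # with a finite element h-method nearly optimal.
    return b / eps

# At eps = 1e-6 with b = 2 the gap is already several orders of magnitude.
gap = cost_known_count(1e-6, b=2) / cost_known_locations(1e-6)
```

At ε = 10⁻⁶ the ratio is roughly 2·10⁶ / 13.8 ≈ 1.4·10⁵, which is the sense in which knowing the locations is "as good as" analyticity while knowing only the count is not.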
Let F be a class of functions defined on a d-dimensional domain. Our task is to compute H^m-norm ε-approximations to solutions of 2mth-order elliptic boundary-value problems Lu = f for a fixed L and for f ∈ F. We assume that the only information we can compute about f ∈ F is the value of a finite number of continuous linear functionals of f, each evaluation having cost c(d). Previous work has assumed that F was the unit ball of a Sobolev space H^r of fixed smoothness r, and it was found that the complexity of computing an ε-approximation was comp(ε, d) = Θ(c(d)(1/ε)^{d/(r+m)}). Since the exponent of 1/ε depends on d, we see that the problem is intractable in 1/ε for any such F of fixed smoothness r. In this paper, we ask whether we can break intractability by letting F be the unit ball of a space of infinite smoothness. To be specific, we let F be the unit ball of a Hardy space of analytic functions defined over a complex d-dimensional ball of radius greater than one. We then show that the problem is tractable in 1/ε. More precisely, we prove that comp(ε, d) = Θ(c(d)(ln 1/ε)^d), where the Θ-constant depends on d. Since for any p > 0, there is a function K(·) such that comp(ε, d) ≤ c(d)K(d)(1/ε)^p for sufficiently small ε, we see that the problem is tractable, with (minimal) exponent 0. Furthermore, we show how to construct a finite element p-method (in the sense of Babuška) that can compute an ε-approximation with cost Θ(c(d)(ln 1/ε)^d). Hence this finite element method is a nearly optimal complexity algorithm for d-dimensional elliptic problems with analytic data.
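The tractability claim in this abstract can be checked with a small numeric sketch (function names are hypothetical and c(d) and the Θ-constants are set to 1 for illustration): the Sobolev bound (1/ε)^{d/(r+m)} is a power of 1/ε with a d-dependent exponent, while the analytic bound (ln 1/ε)^d is only polylogarithmic in 1/ε.

```python
import math

def comp_sobolev(eps, d, r, m):
    # comp(eps, d) = Theta(c(d) (1/eps)^(d/(r+m))), with c(d) := 1:
    # a power of 1/eps whose exponent grows with d (intractable in 1/eps).
    return (1.0 / eps) ** (d / (r + m))

def comp_analytic(eps, d):
    # comp(eps, d) = Theta(c(d) (ln 1/eps)^d), with c(d) := 1:
    # polylogarithmic in 1/eps (tractable, minimal exponent 0).
    return math.log(1.0 / eps) ** d

# Example: d = 3, r = m = 1, eps = 1e-6.
power_cost = comp_sobolev(1e-6, d=3, r=1, m=1)   # (1e6)^1.5, about 1e9
polylog_cost = comp_analytic(1e-6, d=3)          # ln(1e6)^3, a few thousand
```

The roughly six-orders-of-magnitude gap at ε = 10⁻⁶ is what "breaking intractability" buys: for fixed d, the analytic cost grows slower than any positive power of 1/ε.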