We consider adaptive finite element methods for second-order elliptic PDEs, where the arising discrete systems are not solved exactly. For contractive iterative solvers, we formulate an adaptive algorithm that monitors and steers both the adaptive mesh-refinement and the inexact solution of the discrete systems. We prove that the proposed strategy leads to linear convergence with optimal algebraic rates. Unlike prior works, however, we focus on convergence rates with respect to the overall computational cost. In explicit terms, the proposed adaptive strategy thus guarantees quasi-optimal computational time. In particular, our analysis covers linear problems, where the linear systems are solved by an optimally preconditioned CG method, as well as nonlinear problems with strongly monotone nonlinearity, which are linearized by the so-called Zarantonello iteration.
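For orientation, the Zarantonello iteration mentioned above rests on a standard contraction argument: if the operator A is strongly monotone with constant alpha and Lipschitz-continuous with constant L, the damped update u -> u + delta*(F - A(u)) contracts whenever 0 < delta < 2*alpha/L^2. The toy example below illustrates this in R^n with the Euclidean Riesz map; the operator A, the constants, and all variable names are illustrative choices, not the paper's elliptic model problem.

```python
import numpy as np

# Toy Zarantonello (damped Picard) iteration in R^n; the operator A is an
# illustrative strongly monotone, Lipschitz map, not the paper's PDE setting.

def A(u):
    return u + 0.1 * np.arctan(u)          # A'(u) lies in [1, 1.1]

alpha, L = 1.0, 1.1                        # monotonicity / Lipschitz constants
delta = alpha / L**2                       # admissible damping: 0 < delta < 2*alpha/L**2

F = np.array([1.0, -2.0, 0.5])             # right-hand side
u = np.zeros_like(F)
for _ in range(200):
    u_next = u + delta * (F - A(u))        # one Zarantonello step
    if np.linalg.norm(u_next - u) < 1e-12: # contraction makes the update an error proxy
        u = u_next
        break
    u = u_next
print("residual:", np.linalg.norm(F - A(u)))   # ~ 0 at the fixed point
```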
We consider the preconditioned conjugate gradient method (PCG) with an optimal preconditioner in the context of the boundary element method (BEM) for elliptic first-kind integral equations. Our adaptive algorithm steers both the termination of PCG and the local mesh-refinement. Besides convergence with optimal algebraic rates, we also prove almost optimal computational complexity. In particular, we provide an additive Schwarz preconditioner that can be computed in linear complexity and that is optimal in the sense that the condition numbers of the preconditioned systems are uniformly bounded. The model problem is the weakly-singular integral equation associated with the 2D or 3D Laplace operator, with energy space H^{-1/2}(Γ). The main results also hold for the hyper-singular integral equation with energy space H^{1/2}(Γ).
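As a concrete reference point, a minimal PCG routine is sketched below. The function names and the stopping rule based on the preconditioned residual norm are illustrative stand-ins: the adaptive algorithm of the abstract instead terminates PCG relative to the a posteriori error estimator, and the Jacobi preconditioner in the usage example merely takes the place of the additive Schwarz preconditioner.

```python
import numpy as np

def pcg(A, b, apply_Minv, x0=None, tol=1e-10, maxiter=1000):
    """Preconditioned conjugate gradients for a symmetric positive definite A.

    apply_Minv(r) applies the preconditioner to a residual vector r.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = apply_Minv(r)
        rz_new = r @ z
        if np.sqrt(rz_new) <= tol:            # preconditioned residual norm
            return x
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy usage with a Jacobi (diagonal) preconditioner as a stand-in.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, lambda r: r / np.diag(A))
```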
In the context of isogeometric analysis, we consider a Galerkin boundary element discretization of the hyper-singular integral equation associated with the 2D Laplacian. We propose and analyze an adaptive algorithm that locally refines the boundary partition and, moreover, steers the smoothness of the NURBS ansatz functions across elements. In particular, and unlike prior work, the algorithm can both increase and decrease the local smoothness and hence exploits the full potential of isogeometric analysis. We prove that the new adaptive strategy leads to linear convergence with optimal algebraic rates. Numerical experiments confirm the theoretical results. A short appendix comments on analogous results for the weakly-singular integral equation.
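The smoothness steering rests on a standard spline fact: across a breakpoint of multiplicity m, a spline of degree p is C^(p-m) continuous, so increasing (decreasing) the knot multiplicity decreases (increases) the local smoothness. A small illustration, with an arbitrary example knot vector and degree:

```python
# Across a breakpoint of multiplicity m, a degree-p spline is C^(p-m)
# continuous; the knot vector and degree below are arbitrary examples.

from collections import Counter

p = 2                                              # spline degree
knots = [0, 0, 0, 0.25, 0.5, 0.5, 0.75, 1, 1, 1]   # open knot vector

for knot, m in sorted(Counter(knots).items()):
    if 0 < knot < 1:                               # interior breakpoints only
        print(f"knot {knot}: multiplicity {m} -> C^{p - m} continuity")
# multiplicity 1 -> C^1 (maximal smoothness for p = 2),
# multiplicity 2 -> C^0 (reduced smoothness across that element boundary)
```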
We define and analyze (local) multilevel diagonal preconditioners for isogeometric boundary elements on locally refined meshes in two dimensions. Both hyper-singular and weakly-singular integral equations are considered. We prove that the condition number of the preconditioned linear systems is independent of the mesh-size and of the refinement level. Therefore, the computational complexity is optimal when appropriate iterative solvers are employed. Our analysis is carried out for closed and open boundaries, and numerical examples confirm our theoretical results.
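A minimal sketch of how such a multilevel diagonal (BPX-type) preconditioner can be applied is given below, assuming per-level Galerkin matrices A_levels[l] and prolongation matrices P_levels[l] from level l to level l+1; the global variant over all nodes is shown, whereas the local preconditioner of the abstract restricts each level to the degrees of freedom affected by refinement.

```python
import numpy as np

# Sketch of applying a multilevel diagonal (BPX-type) preconditioner,
#     M^{-1} r = sum_l  I_l  D_l^{-1}  I_l^T  r,   with  D_l = diag(A_l),
# where I_l prolongates level-l coefficients to the finest level. The local
# version would sum only over the nodes touched by refinement on each level.

def multilevel_diag_apply(r, A_levels, P_levels):
    """A_levels[l]: Galerkin matrix on level l; P_levels[l]: prolongation l -> l+1."""
    L = len(A_levels) - 1
    residuals = [None] * (L + 1)
    residuals[L] = r
    for l in range(L - 1, -1, -1):               # restrict residual to coarser levels
        residuals[l] = P_levels[l].T @ residuals[l + 1]
    out = np.zeros_like(r)
    for l in range(L + 1):
        c = residuals[l] / np.diag(A_levels[l])  # diagonal scaling on level l
        for k in range(l, L):                    # prolongate back to the finest level
            c = P_levels[k] @ c
        out += c
    return out
```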
We consider a second-order elliptic boundary value problem with strongly monotone and Lipschitz-continuous nonlinearity. We design and study its adaptive numerical approximation interconnecting a finite element discretization, the Banach–Picard linearization, and a contractive linear algebraic solver. In particular, we identify stopping criteria for the algebraic solver that, on the one hand, do not request an overly tight tolerance but, on the other hand, are sufficient for the inexact (perturbed) Banach–Picard linearization to remain contractive. Similarly, we identify suitable stopping criteria for the Banach–Picard iteration that leave a linearization error which does not prevent the residual a posteriori error estimate from reliably steering the adaptive mesh-refinement. For the resulting algorithm, we prove contraction of the (doubly) inexact iterates after a certain number of mesh-refinement, linearization, and algebraic solver steps, leading to linear convergence. Moreover, for usual mesh-refinement rules, we also prove that the overall error decays at the optimal rate with respect to the number of elements (degrees of freedom) added with respect to the initial mesh. Finally, we prove that our fully adaptive algorithm drives the overall error down at the same optimal rate also with respect to the overall algorithmic cost, expressed as the cumulative sum of the number of mesh elements over all mesh-refinement, linearization, and algebraic solver steps. Numerical experiments support these theoretical findings and illustrate the optimal overall algorithmic cost of the fully adaptive algorithm on several test cases.
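A schematic sketch of the resulting nested loop structure is given below. The stopping parameters lambda_alg and lambda_lin, the marking parameter theta, and all callables passed in (algebraic_step, picard_system, estimate, mark, refine) are hypothetical placeholders for the corresponding building blocks, not the authors' implementation.

```python
import numpy as np

# Schematic nested loops: outer mesh-refinement, middle Banach-Picard
# linearization, inner contractive algebraic solver. All callables and the
# parameters lambda_alg, lambda_lin, theta are hypothetical placeholders.

def adaptive_picard(mesh, u, algebraic_step, picard_system, estimate, mark,
                    refine, lambda_alg=0.1, lambda_lin=0.1, theta=0.5, tol=1e-8):
    while True:                                       # mesh-refinement loop
        while True:                                   # Banach-Picard linearization loop
            system = picard_system(mesh, u)           # linearize at the current iterate
            v_old, v = u, algebraic_step(system, u)   # first algebraic solver step
            while True:                               # algebraic solver loop
                alg_est = np.linalg.norm(v - v_old)   # contraction => algebraic error proxy
                lin_est = np.linalg.norm(v - u)       # size of the Picard update
                if alg_est <= lambda_alg * lin_est:
                    break                             # algebraic error negligible
                v_old, v = v, algebraic_step(system, v)
            eta = estimate(mesh, v)                   # residual a posteriori estimator
            u = v
            if lin_est <= lambda_lin * eta:
                break                                 # linearization error negligible
        if eta <= tol:
            return mesh, u                            # overall tolerance reached
        mesh, u = refine(mesh, mark(mesh, eta, theta), u)
```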