We consider model order reduction by proper orthogonal decomposition (POD) for parametrized partial differential equations, where the underlying snapshots are computed with adaptive finite elements. We address computational and theoretical issues arising from the fact that the snapshots are members of different finite element spaces. We propose a method to create a POD-Galerkin model without interpolating the snapshots onto a common finite element mesh. The error of the reduced-order solution is not necessarily Galerkin orthogonal to the reduced space created from space-adapted snapshots. We analyze how this influences the error assessment for POD-Galerkin models of linear elliptic boundary value problems. As a numerical example we consider a two-dimensional convection-diffusion equation with a parametrized convective direction. To illustrate the applicability of our techniques to non-linear time-dependent problems, we present a test case of a two-dimensional viscous Burgers equation with parametrized initial data.
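A key computational ingredient is that a POD basis can be assembled by the method of snapshots, which needs only the Gramian of pairwise snapshot inner products rather than a shared mesh. The following is a minimal sketch of that step, assuming the pairwise inner products can be evaluated (for instance by exact integration over a common refinement of the two adaptive meshes involved); the function names and the dense demo data are illustrative and not taken from the paper.

```python
# Minimal sketch of POD via the method of snapshots. All the mesh information
# the method needs is the Gramian of pairwise snapshot inner products.
import numpy as np

def pod_method_of_snapshots(gramian, rank):
    """Return POD eigenvalues and the coefficients expressing each POD mode
    as a linear combination of the snapshots.

    gramian[i, j] = (u_i, u_j) in the chosen inner product."""
    evals, evecs = np.linalg.eigh(gramian)            # ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]        # descending order
    evals = np.maximum(evals, 0.0)                    # clip round-off negatives
    coeffs = evecs[:, :rank] / np.sqrt(evals[:rank])  # mode_k = sum_i coeffs[i, k] u_i
    return evals, coeffs

# Demo with snapshots as plain vectors and the Euclidean inner product;
# in the adaptive FE setting only the Gramian entries change, e.g. to
# integrals of snapshot pairs over a common refinement of their meshes.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 10))            # columns = snapshots u_i
K = snapshots.T @ snapshots
evals, coeffs = pod_method_of_snapshots(K, rank=3)
modes = snapshots @ coeffs                            # POD basis vectors
print(np.round(modes.T @ modes, 6))                   # approx. identity
```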
We consider model order reduction based on proper orthogonal decomposition (POD) for unsteady incompressible Navier-Stokes problems, assuming that the snapshots are given by spatially adapted finite element solutions. We propose two approaches for deriving stable POD-Galerkin reduced-order models in this context. In the first approach, the pressure term and the continuity equation are eliminated by imposing a weak incompressibility constraint with respect to a pressure reference space. In the second approach, we derive an inf-sup stable velocity-pressure reduced-order model by enriching the velocity reduced space with supremizers computed on a velocity reference space. For problems with inhomogeneous Dirichlet conditions, we show how suitable lifting functions can be obtained from standard adaptive finite element computations. We provide a numerical comparison of the considered methods for a regularized lid-driven cavity problem.
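To make the second approach concrete, the sketch below illustrates supremizer enrichment of a velocity reduced basis, assuming the velocity inner-product matrix X_v and the discrete divergence matrix B are assembled on fixed reference spaces; the matrices and names below are synthetic stand-ins, not the paper's implementation.

```python
# Minimal sketch of supremizer enrichment for an inf-sup stable
# velocity-pressure reduced-order model.
import numpy as np

def enrich_with_supremizers(X_v, B, V_rb, Q_rb):
    """For each reduced pressure basis vector q_k, the supremizer s_k solves
    X_v s_k = B^T q_k; the velocity reduced space is augmented with the
    supremizers to recover a reduced inf-sup condition."""
    supremizers = np.linalg.solve(X_v, B.T @ Q_rb)   # one solve per pressure mode
    V_enriched = np.hstack([V_rb, supremizers])
    # re-orthonormalize the enriched velocity basis in the X_v inner product
    L = np.linalg.cholesky(V_enriched.T @ X_v @ V_enriched)
    return V_enriched @ np.linalg.inv(L).T

# Small synthetic demo: a random SPD velocity matrix and divergence matrix.
rng = np.random.default_rng(1)
n_v, n_p = 40, 10
A = rng.standard_normal((n_v, n_v))
X_v = A @ A.T + n_v * np.eye(n_v)
B = rng.standard_normal((n_p, n_v))
V_rb = np.linalg.qr(rng.standard_normal((n_v, 4)))[0]   # velocity POD modes
Q_rb = np.linalg.qr(rng.standard_normal((n_p, 3)))[0]   # pressure POD modes
V_enriched = enrich_with_supremizers(X_v, B, V_rb, Q_rb)
print(V_enriched.shape)   # (40, 7): 4 velocity modes + 3 supremizers
```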
This work considers a weighted POD-greedy method to estimate statistical outputs of parabolic PDE problems with parametrized random data. The key idea of weighted reduced basis methods is to weight the parameter-dependent error estimate according to a probability measure in the set-up of the reduced space. The error of stochastic finite element solutions is usually measured in a root mean square sense regarding their dependence on the stochastic input parameters. An orthogonal projection of a snapshot set onto a corresponding POD basis defines an optimum reduced approximation in terms of a Monte Carlo discretization of the root mean square error. The errors of a weighted POD-greedy Galerkin solution are compared against an orthogonal projection of the underlying snapshots onto a POD basis for a numerical example involving thermal conduction. In particular, it is assessed whether a weighted POD-greedy solution is able to come significantly closer to the optimum than a non-weighted equivalent. Additionally, the performance of a weighted POD-greedy Galerkin solution is considered with respect to the mean absolute error of an adjoint-corrected functional of the reduced solution.
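The following is a minimal sketch of a weighted POD-greedy loop under the idea stated above: the parameter-dependent error estimate is multiplied by a weight derived from the probability measure before the greedy selection. The helpers solve_trajectory, error_estimate and weight are hypothetical placeholders, not functions from the paper.

```python
# Minimal sketch of a weighted POD-greedy basis construction.
import numpy as np

def weighted_pod_greedy(train_set, weight, solve_trajectory, error_estimate,
                        n_iter, modes_per_iter=1):
    basis = np.zeros((solve_trajectory(train_set[0]).shape[0], 0))
    for _ in range(n_iter):
        # greedy step: weight the error estimate by the probability density
        scores = [weight(mu) * error_estimate(basis, mu) for mu in train_set]
        mu_star = train_set[int(np.argmax(scores))]
        # POD-compress the projection error of the selected trajectory
        U = solve_trajectory(mu_star)
        if basis.shape[1] > 0:
            U = U - basis @ (basis.T @ U)
        left, _, _ = np.linalg.svd(U, full_matrices=False)
        basis = np.hstack([basis, left[:, :modes_per_iter]])
        basis, _ = np.linalg.qr(basis)    # keep the basis orthonormal
    return basis

# Toy usage with synthetic snapshot trajectories and a uniform weight.
rng = np.random.default_rng(2)
data = {mu: rng.standard_normal((50, 8)) for mu in np.linspace(0.0, 1.0, 5)}
basis = weighted_pod_greedy(
    train_set=list(data),
    weight=lambda mu: 1.0,
    solve_trajectory=lambda mu: data[mu],
    error_estimate=lambda V, mu: np.linalg.norm(
        data[mu] - V @ (V.T @ data[mu])) if V.size else np.linalg.norm(data[mu]),
    n_iter=3)
print(basis.shape)   # (50, 3)
```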
We study the iterative solution of linear systems of equations arising from stochastic Galerkin finite element discretizations of saddle point problems. We focus on the Stokes model with random data parametrized by uniformly distributed random variables. We introduce a Bramble-Pasciak conjugate gradient method as a linear solver. This method is associated with a block triangular preconditioner which must be scaled using a properly chosen parameter. We show how the existence requirements of such a conjugate gradient method can be met in our setting. As a reference solver, we consider a standard MINRES method, which is restricted to symmetric preconditioning. We analyze the performance of the two different solvers depending on relevant physical and numerical parameters by means of eigenvalue estimates. For this purpose, we derive bounds for the eigenvalues of the relevant preconditioned sub-matrices. We illustrate our findings using the flow in a driven cavity as a numerical test case, where the viscosity is given by a truncated Karhunen-Loève expansion of a random field. In this example, a Bramble-Pasciak conjugate gradient method with a block triangular preconditioner converges faster than a MINRES method with a comparable block diagonal preconditioner in terms of iteration counts.
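As a small illustration of the saddle point setting, the sketch below sets up the reference solver mentioned above: MINRES applied to a Stokes-type saddle point system with a symmetric positive definite block diagonal preconditioner diag(A, S_hat), where S_hat is a diagonal approximation of the Schur complement. The synthetic matrices are stand-ins and not the stochastic Galerkin blocks of the paper. The Bramble-Pasciak variant instead applies a block triangular preconditioner, scaled so that the transformed system becomes symmetric positive definite in a non-standard inner product, which allows a conjugate gradient iteration.

```python
# Minimal sketch: block-diagonally preconditioned MINRES for a saddle point system.
import numpy as np
from scipy.sparse import bmat, csr_matrix
from scipy.sparse.linalg import LinearOperator, minres, splu

rng = np.random.default_rng(3)
n_u, n_p = 60, 20
A_d = rng.standard_normal((n_u, n_u))
A = csr_matrix(A_d @ A_d.T + n_u * np.eye(n_u))    # SPD velocity block
B = csr_matrix(rng.standard_normal((n_p, n_u)))    # discrete divergence
K = bmat([[A, B.T], [B, None]], format="csr")      # indefinite saddle point matrix

# Block diagonal preconditioner diag(A, S_hat) with a diagonal Schur
# complement approximation S_hat = diag(B diag(A)^{-1} B^T).
s_hat_diag = B.power(2) @ (1.0 / A.diagonal())
A_lu = splu(A.tocsc())

def apply_prec(r):
    # apply the inverse of the block diagonal preconditioner
    return np.concatenate([A_lu.solve(r[:n_u]), r[n_u:] / s_hat_diag])

P_inv = LinearOperator(K.shape, matvec=apply_prec)
b = rng.standard_normal(n_u + n_p)
x, info = minres(K, b, M=P_inv)
print(info, np.linalg.norm(K @ x - b) / np.linalg.norm(b))
```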