2017
DOI: 10.1137/16m108269x

Model Order Reduction Techniques with a Posteriori Error Control for Nonlinear Robust Optimization Governed by Partial Differential Equations

Abstract: We consider a nonlinear optimization problem governed by partial differential equations (PDE) with uncertain parameters. It is addressed by a robust worst case formulation. The resulting optimization problem is of bi-level structure and is difficult to treat numerically. We propose an approximate robust formulation that employs linear and quadratic approximations. To speed up the computation, reduced order models based on proper orthogonal decomposition (POD) in combination with a posteriori error estimators a…
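As a rough sketch (in illustrative notation, not taken verbatim from the paper), the worst-case formulation mentioned in the abstract is a bi-level problem of the form

% Outer level: choose the design u; inner level: the worst admissible
% realization of the uncertain parameter p in the uncertainty set U.
\min_{u}\;\max_{p\in\mathcal{U}}\; J\bigl(y(u,p),u\bigr)
\quad\text{s.t.}\quad
g\bigl(y(u,p),u,p\bigr)\le 0\quad\forall p\in\mathcal{U},

where y(u,p) denotes the solution of the governing PDE for design u and parameter p. The inner maximization is what the linear and quadratic approximations, together with the POD reduced-order models and their a posteriori error bounds, are meant to make tractable.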

Cited by 29 publications (31 citation statements). References 51 publications.

Citation statements (ordered by relevance):
“…3). The system matrix and right-hand side in (8) are decomposed into fixed submatrices and subvectors multiplied by coefficients that fully inherit the dependence on p. In this way, the submatrices and subvectors need to be assembled only once beforehand, while the weighting coefficients are promptly computed by evaluating an analytical formula every time the geometry changes.…”
Section: Computation of PMSMs (mentioning)
confidence: 99%
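The parameter-separable (affine) decomposition described in this excerpt can be sketched as follows; this is a minimal illustration in generic notation, not the cited implementation, and the coefficient functions theta_q(p) are placeholders.

import numpy as np

# Affine decomposition:  A(p) = sum_q theta_q(p) * A_q,  b(p) = sum_q theta_q(p) * b_q.
# The submatrices A_q and subvectors b_q are assembled once (offline); only the
# scalar coefficients are re-evaluated whenever the parameter p changes (online).

def assemble_fixed_terms(n, n_terms, rng):
    """Offline stage, done once: parameter-independent submatrices and subvectors."""
    # Shifted random matrices stand in for FEM submatrices and keep A(p) well conditioned.
    A_terms = [rng.standard_normal((n, n)) + n * np.eye(n) for _ in range(n_terms)]
    b_terms = [rng.standard_normal(n) for _ in range(n_terms)]
    return A_terms, b_terms

def evaluate_system(p, A_terms, b_terms):
    """Online stage: evaluate analytical coefficients and combine, no re-assembly."""
    theta = [p ** q for q in range(len(A_terms))]   # placeholder analytical formulas
    A = sum(t * Aq for t, Aq in zip(theta, A_terms))
    b = sum(t * bq for t, bq in zip(theta, b_terms))
    return A, b

rng = np.random.default_rng(0)
A_terms, b_terms = assemble_fixed_terms(n=50, n_terms=3, rng=rng)
for p in (0.1, 0.5, 0.9):              # the geometry/parameter changes...
    A, b = evaluate_system(p, A_terms, b_terms)
    y = np.linalg.solve(A, b)          # ...but only the coefficients are recomputed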
“…In contrast, gradient-based methods typically converge fast, provided that the gradients of the objective and constraint functions are given with sufficient accuracy. Although this can be cumbersome for complicated functions, such preliminary computations are rewarded by a low iteration count of the surrounding optimization procedure [8]. The comparably large number of uncertain parameters is efficiently treated by stochastic collocation on a Clenshaw-Curtis sparse grid [17], [18].…”
Section: Introduction (mentioning)
confidence: 99%
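The one-dimensional building block of the Clenshaw-Curtis sparse grids mentioned here is the nested Clenshaw-Curtis quadrature rule. The following is a minimal sketch of that rule and of a collocation estimate of a mean value; it is illustrative only and not the implementation of [17], [18].

import numpy as np

def clenshaw_curtis(n):
    """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1]."""
    if n == 0:
        return np.array([0.0]), np.array([2.0])
    k = np.arange(n + 1)
    nodes = np.cos(np.pi * k / n)
    weights = np.empty(n + 1)
    for i in range(n + 1):
        s = 0.0
        for j in range(1, n // 2 + 1):
            b = 1.0 if 2 * j == n else 2.0
            s += b / (4 * j * j - 1) * np.cos(2.0 * np.pi * j * i / n)
        c = 1.0 if (i == 0 or i == n) else 2.0
        weights[i] = c / n * (1.0 - s)
    return nodes, weights

# Collocation estimate of the mean of a quantity of interest q(p) for a
# parameter p uniformly distributed on [-1, 1] (density 1/2).
nodes, weights = clenshaw_curtis(8)
q = lambda p: np.exp(p)                     # placeholder quantity of interest
mean_q = 0.5 * np.sum(weights * q(nodes))
print(mean_q, np.sinh(1.0))                 # collocation estimate vs. exact mean

In higher dimensions, these nested one-dimensional rules are combined via Smolyak's construction into a sparse grid, which keeps the number of PDE solves moderate even for a comparably large number of uncertain parameters.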
“…[39], a numerically feasible optimization problem is obtained. Higher-order expansions can also be exploited [40]; however, in this work only linearizations of the cost function and the constraint are considered:…”
Section: Robust Optimization (mentioning)
confidence: 99%
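A common form of such a linearized worst-case approximation (written here in generic notation, not necessarily that of [39] or [40]) is

% Linearize the cost J in the uncertain parameter p around the nominal value
% \bar p; the worst case over the ball \|\delta p\| \le \delta is then
% available in closed form through the dual norm.
\max_{\|\delta p\|\le\delta} J(u,\bar p+\delta p)
\;\approx\;
\max_{\|\delta p\|\le\delta}\Bigl(J(u,\bar p)+\nabla_p J(u,\bar p)^{\top}\delta p\Bigr)
\;=\;
J(u,\bar p)+\delta\,\bigl\|\nabla_p J(u,\bar p)\bigr\|_{*},

and analogously for the constraint, which turns the bi-level problem into a standard nonlinear program.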
“…However, they can be obtained analogously, as described previously. Finally, this approach can be generalized to use a quadratic approximation with respect to P; see [40].…”
Section: Robust Optimization (mentioning)
confidence: 99%
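In the quadratic case referred to here, the inner worst-case problem keeps the second-order term and becomes a trust-region-type subproblem; sketched in generic notation (an assumption about the form used in [40], not a quotation):

% Second-order Taylor model of J in the uncertain parameter, maximized over
% the uncertainty ball \|\delta p\| \le \delta.
\max_{\|\delta p\|\le\delta}\;
J(u,\bar p)+\nabla_p J(u,\bar p)^{\top}\delta p
+\tfrac12\,\delta p^{\top}\nabla^{2}_{pp}J(u,\bar p)\,\delta p .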