2015
DOI: 10.1002/nme.4820

Structure‐preserving, stability, and accuracy properties of the energy‐conserving sampling and weighting method for the hyper reduction of nonlinear finite element dynamic models

Abstract: The computational efficiency of a typical, projection-based, nonlinear model reduction method hinges on the efficient approximation, for explicit computations, of the scalar projections onto a subspace of a residual vector. For implicit computations, it also hinges on the additional efficient approximation of similar projections of the Jacobian of this residual with respect to the solution. The computation of both approximations is often referred to in the literature as hyper reduction. To this effect, this pa…
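
To make the abstract's terms concrete, here is a minimal, self-contained Python sketch of the two reduced quantities whose cheap approximation hyper reduction targets: the projected residual V^T r(u) and the projected Jacobian V^T (dr/du) V. The toy residual and the random orthonormal basis are illustrative assumptions, not taken from the paper.

```python
import numpy as np

m, N = 1000, 5                      # HDM dimension m >> ROM dimension N
rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.standard_normal((m, N)))   # illustrative reduced-order basis

def residual(u):
    # hypothetical toy nonlinear residual r(u), applied elementwise
    return u**3 + u - 1.0

def jacobian_diag(u):
    # diagonal of its Jacobian dr/du
    return 3.0 * u**2 + 1.0

q = rng.standard_normal(N)          # generalized coordinates of the ROM
u = V @ q                           # reconstructed high-dimensional state

r_N = V.T @ residual(u)                         # reduced residual, O(m N) work
J_N = V.T @ (jacobian_diag(u)[:, None] * V)     # reduced Jacobian, O(m N^2) work

# Hyper reduction replaces these two dense projections with approximations
# whose evaluation cost is independent of the large dimension m.
print(r_N.shape, J_N.shape)         # (5,) (5, 5)
```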

Cited by 243 publications (234 citation statements)
References 41 publications
Citation statements (ordered by relevance):
“…The second aspect deals with the complexity reduction in the computation of certain nonlinear terms in the governing equations, which need to be updated for every evaluation in both static and dynamic problems. In this aspect, usually referred to as Hyper Reduction, huge strides have been made recently in the context of Finite Elements [3,13].…”
Section: Introduction
confidence: 99%
“…The knowledge about the HDM response is obtained during a training procedure: the design parameter vector μ is sampled at a few points using an effective sampling strategy [5,79,160]. The projection of the μ-parametric nonlinear HDM of dimension m onto a subspace of dimension N << m is done by using an (m × N) reduced-order basis [V] independent of μ, which yields the μ-parametric nonlinear ROM (note that we have used the notation m instead of m_DOF, contrary to the notation used in this chapter, because we need to simplify the reading of the mathematical symbols that we use hereinafter).…”
Section: Problem To Be Solved and Approach Proposed
confidence: 99%
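
A minimal sketch of the projection step described in the excerpt above, under stated assumptions: a hypothetical toy μ-parametric residual stands in for the nonlinear HDM, and the μ-independent basis V is random-orthonormal rather than trained from sampled HDM solutions.

```python
import numpy as np

m, N = 2000, 4
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((m, N)))  # (m x N) basis, independent of mu

def r(u, mu):
    # hypothetical toy mu-parametric residual r(u; mu)
    return mu * u + u**3 - 1.0

def dr_diag(u, mu):
    # diagonal Jacobian of r with respect to u
    return mu + 3.0 * u**2

def solve_rom(mu, iters=20):
    """Newton's method on the reduced equations V^T r(V q; mu) = 0."""
    q = np.zeros(N)
    for _ in range(iters):
        u = V @ q                                  # lift to HDM dimension
        rN = V.T @ r(u, mu)                        # N-dimensional residual
        JN = V.T @ (dr_diag(u, mu)[:, None] * V)   # N x N reduced Jacobian
        q -= np.linalg.solve(JN, rN)
    return q

q = solve_rom(mu=2.0)
print(np.linalg.norm(V.T @ r(V @ q, 2.0)))         # ~0: reduced equations satisfied
```

Note that every Newton step above still touches the full dimension m when forming rN and JN, which is exactly the bottleneck the next excerpt addresses.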
“…Despite its low dimension, the resulting μ-parametric nonlinear ROM does not necessarily guarantee computational feasibility, because the cost of constructing the nonlinear ROM scales not only with its size N but also with that of the underlying μ-parametric nonlinear HDM, m >> N. A remedy consists in equipping the nonlinear ROM with a procedure for approximating the resulting reduced operators whose computational complexity scales only with the small size N of the ROM (hyper-reduction method) [78,79,173,91,45].…”
Section: Problem To Be Solved and Approach Proposed
confidence: 99%
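
The excerpt above states the hyper-reduction remedy in general terms; the paper under review develops one such method, the energy-conserving sampling and weighting (ECSW) approach. The following is a schematic of the sampling-and-weighting idea only: the one-dof "elements", the random sample, and the crude uniform weights are hypothetical stand-ins, since the actual ECSW weights are computed from a nonnegative least-squares training problem over HDM snapshots.

```python
import numpy as np

m, N = 1000, 5
rng = np.random.default_rng(2)
V, _ = np.linalg.qr(rng.standard_normal((m, N)))

def element_residual(e, u):
    # hypothetical scalar residual contribution of "element" e
    return np.array([u[e]**3 + u[e] - 1.0])

u = rng.standard_normal(m)

# Exact reduced residual: assembly over ALL elements -> cost scales with m.
r_exact = sum(V[[e]].T @ element_residual(e, u) for e in range(m))

# Hyper-reduced residual: a small sampled mesh E with weights w_e >= 0,
# chosen here at random purely for illustration -> cost scales with |E| ~ N.
E = rng.choice(m, size=30, replace=False)
w = np.full(E.size, m / E.size)               # crude uniform weights
r_hyper = sum(w_e * V[[e]].T @ element_residual(e, u)
              for e, w_e in zip(E, w))

print(np.linalg.norm(r_exact - r_hyper))      # approximation error
```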
“…Nevertheless, for optimization with large-scale nonlinear computational models, the introduction of parametric reduced-order models is often necessary (see for instance [3,24]). Concerning the algorithms for solving optimization problems under uncertainties, many methods have been proposed in the literature, such as gradient-based learning, which is adapted to convex problems [48,89], and global search methods such as stochastic algorithms, genetic algorithms, and evolutionary algorithms [14,46].…”
Section: Algorithms For Solving Optimization Problems Under Uncertainty
confidence: 99%