12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference 2008
DOI: 10.2514/6.2008-6020

A Multifidelity Gradient-Free Optimization Method and Application to Aerodynamic Design

Abstract: The use of expensive simulations in engineering design optimization often rules out conventional techniques for design optimization for a variety of reasons, such as lack of smoothness, unavailability of gradient information, presence of multiple local optima, and most importantly, limits on available computing resources and time. Often, the designer also has access to lower-fidelity simulations that may suffer from poor accuracy in some regions of the design space, but are much cheaper to evaluate than the or…

Cited by 39 publications (19 citation statements)
References 16 publications
“…For such problems, 'multifidelity correction' methods have been developed, primarily in the optimization context. These techniques model the mapping µ → δ s using a data-fit surrogate; they either enforce 'global' zeroth-order consistency between the corrected surrogate prediction and the high-fidelity prediction at training points [23,29,33,39,35], or 'local' first- or second-order consistency at trust-region centers [2,19]. Such approaches tend to work well when the surrogate-model error exhibits a lower variance than the high-fidelity response [35] and the input-space dimension is small. Reduced-order models (ROMs) employ a projection process to reduce the state-space dimensionality of the high-fidelity computational model.…”
mentioning
confidence: 99%
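The 'global' zeroth-order consistency correction described in the excerpt above — fit a data-fit surrogate to the discrepancy between the high- and low-fidelity models so that the corrected model matches the high-fidelity prediction exactly at the training points — can be sketched as follows. The 1-D model pair and the choice of an interpolating RBF surrogate are illustrative assumptions, not the formulation of any particular cited paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 1-D high- and low-fidelity models (illustrative stand-ins).
def f_hi(x):
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def f_lo(x):
    return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

# Training points where the expensive high-fidelity model is evaluated.
x_train = np.linspace(0.0, 1.0, 6).reshape(-1, 1)
delta = f_hi(x_train) - f_lo(x_train)  # sampled discrepancy values

# Data-fit surrogate of the discrepancy delta(x); because an RBF
# interpolant passes exactly through its training data, the corrected
# model below is zeroth-order consistent at the training points.
delta_hat = RBFInterpolator(x_train, delta)

def f_corrected(x):
    """Cheap corrected model: low-fidelity prediction plus discrepancy fit."""
    x = np.atleast_2d(x)
    return f_lo(x) + delta_hat(x)

# At the training points the corrected surrogate reproduces f_hi exactly.
print(np.allclose(f_corrected(x_train), f_hi(x_train)))
```

Between training points the corrected model is only as good as the discrepancy fit, which is why (as the excerpt notes) these schemes work best when the discrepancy varies less than the high-fidelity response itself.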
“…Instead, we could construct a parametric model that approximates f_hi but is inexpensive to compute. Such a surrogate model can represent the difference between the high-fidelity and the low-fidelity models [12], [21]:…”
Section: Policy Correction
mentioning
confidence: 99%
“…This may provide an advantage over single-fidelity pattern-search and simplex methods, especially when the dimension of the design space is large. For example, constraint-handling approaches used in conjunction with Efficient Global Optimization (EGO) (Jones et al. 1998) either estimate the probability that a point is both a minimum and feasible, or add a smooth penalty to the surrogate model prior to using optimization to select new high-fidelity sample locations (Sasena et al. 2002; Jones 2001; Rajnarayan et al. 2008). These heuristic approaches may work well in practice, but unfortunately have no guarantee of convergence to a minimum of the high-fidelity design problem.…”
Section: Introduction
mentioning
confidence: 99%
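One of the constraint-handling heuristics mentioned in the excerpt above — weighting the expected improvement of a candidate point by the probability that it is feasible — can be sketched as follows. This is a minimal illustration assuming Gaussian surrogate predictions (mean and standard deviation) for both the objective and a single constraint g(x) ≤ 0; it is not the exact formulation of any of the cited papers:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of a Gaussian prediction (mu, sigma)
    below the incumbent best objective value f_best."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predicted variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def prob_feasible(mu_g, sigma_g):
    """P(g <= 0) for a Gaussian prediction of the constraint g."""
    sigma_g = np.maximum(sigma_g, 1e-12)
    return norm.cdf(-mu_g / sigma_g)

def constrained_ei(mu, sigma, f_best, mu_g, sigma_g):
    """Constrained infill criterion: EI weighted by feasibility probability.
    New high-fidelity samples are placed where this product is maximized."""
    return expected_improvement(mu, sigma, f_best) * prob_feasible(mu_g, sigma_g)
```

For example, a point predicted at the incumbent value with unit uncertainty and a 50/50 feasibility chance scores `constrained_ei(0.0, 1.0, 0.0, 0.0, 1.0)`, roughly 0.199. As the excerpt notes, criteria of this kind are heuristics: they balance exploration and feasibility but carry no convergence guarantee for the high-fidelity problem.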