2019
DOI: 10.48550/arxiv.1902.04562
Preprint

Topology Optimization under Uncertainty using a Stochastic Gradient-based Approach

Abstract: Topology optimization under uncertainty (TOuU) often defines objectives and constraints by statistical moments of geometric and physical quantities of interest. Most traditional TOuU methods use gradient-based optimization algorithms and rely on accurate estimates of the statistical moments and their gradients, e.g., via adjoint calculations. When the number of uncertain inputs is large or the quantities of interest exhibit large variability, a large number of adjoint (and/or forward) solves may be required to…
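As a rough illustration of the idea in the abstract, the sketch below estimates the gradient of a moment-type objective E_xi[Q(design, xi)] from a small mini-batch of random samples instead of a full set of per-sample adjoint solves. The names sample_xi and grad_Q are hypothetical placeholders for a random-input sampler and a per-sample (e.g., adjoint-based) gradient; they are assumptions for illustration, not code from the paper.

    import numpy as np

    def stochastic_moment_gradient(design, sample_xi, grad_Q, n_samples=8):
        # Mini-batch estimate of d/d(design) E_xi[ Q(design, xi) ].
        # Each sample would normally require one forward and one adjoint solve;
        # a stochastic-gradient method uses only a few samples per optimization step.
        # sample_xi() and grad_Q(design, xi) are hypothetical user-supplied callables.
        xis = [sample_xi() for _ in range(n_samples)]
        grads = [np.asarray(grad_Q(design, xi)) for xi in xis]
        return np.mean(grads, axis=0)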

Cited by 7 publications (11 citation statements) | References 72 publications
“…, N }, and the derivative $\partial J_{i_k}/\partial p_k$ is calculated using back propagation [2,61]. We, however, use an improved variant of SGD, namely, the Adaptive Moment Estimation (Adam) algorithm [62,63]. Adam leverages past gradient information to retard the descent along large gradients.…”
Section: Training a Neural Network (mentioning)
confidence: 99%
“…where $\hat{u}_q$ and $\hat{v}_q$ denote the incremental state and incremental adjoint, respectively. By taking the variation of (31) with respect to the adjoint $v_q$ and using (2), we obtain the incremental state problem: find $\hat{u}_q \in U$ such that $\langle \tilde{v},\, \partial_{vu} r_q\, \hat{u}_q \rangle = -\langle \tilde{v},\, \partial_{vm} r_q\, \hat{m}_q \rangle,\ \forall \tilde{v} \in V$,…”
Section: Computation Of the Gradient And Hessian Action (mentioning)
confidence: 99%
“…where the derivatives $\partial_{vu} r_q : U \to V$ and $\partial_{vm} r_q : M \to V$ are linear operators. The incremental adjoint problem, obtained by taking variation of (31) with respect to the state $u$ and using (2), reads:…”
Section: Computation Of the Gradient And Hessian Action (mentioning)
confidence: 99%
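For readers less familiar with the adjoint machinery behind these excerpts, the following generic Lagrangian-based sketch (notation assumed here, not taken from the cited text) shows how a reduced-space gradient is typically obtained before incremental state and adjoint problems are introduced for Hessian actions; $\hat{J}(m) = J(u(m), m)$ denotes the reduced objective and $r$ the state residual.

    % requires amsmath; a generic adjoint-gradient sketch, not the cited paper's equations
    \begin{align*}
    \mathcal{L}(u,m,v) &= J(u,m) + \langle v,\, r(u,m)\rangle, && \text{(Lagrangian)}\\
    \langle \tilde{v},\, r(u,m)\rangle &= 0 \quad \forall \tilde{v} \in V, && \text{(state problem)}\\
    \langle \tilde{u},\, \partial_u J + (\partial_u r)^{*} v\rangle &= 0 \quad \forall \tilde{u} \in U, && \text{(adjoint problem)}\\
    \langle \tilde{m},\, \nabla_m \hat{J}\rangle &= \langle \tilde{m},\, \partial_m J + (\partial_m r)^{*} v\rangle \quad \forall \tilde{m} \in M. && \text{(reduced gradient)}
    \end{align*}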