2008
DOI: 10.1016/j.jcp.2008.03.013

Adjoint-based optimization of PDE systems with alternative gradients

Abstract: In this work we investigate a technique for accelerating convergence of adjoint-based optimization of PDE systems based on a nonlinear change of variables in the control space. This change of variables is accomplished in the differentiate-then-discretize approach by constructing the descent directions in a control space not equipped with the Hilbert structure. We show how such descent directions can be computed in general Lebesgue and Besov spaces, and argue that in the Besov space case determination of descen…
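The abstract's central idea, computing descent directions with respect to a norm other than the Hilbert (L²) one, can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: it extracts a steepest-descent direction in the L^p norm from a discretized, adjoint-based L² gradient using the Hölder-duality argument. The function name lp_descent_direction and the sample data are hypothetical.

```python
import numpy as np

def lp_descent_direction(grad_l2, p, eps=1e-14):
    """Steepest-descent direction with respect to the L^p norm (1 < p < inf),
    obtained from the L^2 (Riesz) gradient: the maximizer of <g, d> over
    ||d||_p = 1 is d ~ sign(g) |g|^{p'-1}, with p' the conjugate exponent,
    1/p + 1/p' = 1.  Illustrative sketch only."""
    p_conj = p / (p - 1.0)                     # conjugate exponent p'
    d = np.sign(grad_l2) * np.abs(grad_l2) ** (p_conj - 1.0)
    norm = np.linalg.norm(d, ord=p) + eps      # normalize in the discrete l^p norm
    return -d / norm                           # descent (negative) direction

# usage: replace the plain L^2 gradient step inside a descent loop
g = np.array([0.5, -2.0, 0.1, 1.5])            # hypothetical adjoint-based L^2 gradient
step = lp_descent_direction(g, p=4.0)
```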

Cited by 28 publications (29 citation statements)
References 30 publications
“…As reported by Protas [39], the gradient is identified in a given space for which the metric has been selected. To do so, the following two definitions are needed.…”
Section: Space-dependent Sobolev Cost Function Gradient (mentioning)
confidence: 99%
“…This way of writing the cost function gradient is the ordinary one, as first suggested by Lions [46] and Céa [47]. However, according to Protas [39], other inner-product definitions should be used because of the poor scaling of the corresponding discrete optimization problem. As reported in [48], incorporating such derivatives into the inner product, instead of using the $L^2$ inner product, has the effect of scale-dependent filtering and allows one to extract smoother gradients, thereby preconditioning the optimization process.…”
Section: Space-dependent Sobolev Cost Function Gradient (mentioning)
confidence: 99%
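The "scale-dependent filtering" interpretation mentioned in this excerpt can be made concrete with a small sketch. Assuming, purely for illustration, a periodic 1D control discretized on a uniform grid, the H¹ (Sobolev) gradient is obtained from the L² gradient by damping each wavenumber k by 1/(1 + ℓ²k²); the cited works solve a boundary-value problem instead, and the function name below is hypothetical.

```python
import numpy as np

def sobolev_filter(grad_l2, dx, ell):
    """Extract an H^1 (Sobolev) gradient from the L^2 gradient on a periodic
    1D grid by solving (1 - ell^2 d^2/dx^2) gS = gL2 in Fourier space.
    Each wavenumber k is damped by 1/(1 + ell^2 k^2): a low-pass,
    scale-dependent filter with cut-off length-scale ell.  Sketch only."""
    n = grad_l2.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # angular wavenumbers
    g_hat = np.fft.fft(grad_l2)
    gS_hat = g_hat / (1.0 + (ell * k) ** 2)        # scale-dependent damping
    return np.real(np.fft.ifft(gS_hat))

# usage: smooth a noisy gradient before taking the descent step
x = np.linspace(0.0, 1.0, 256, endpoint=False)
noisy_grad = np.sin(2 * np.pi * x) + 0.3 * np.random.randn(x.size)
smooth_grad = sobolev_filter(noisy_grad, dx=x[1] - x[0], ell=0.05)
```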
“…For measuring the effects of the noise level, we used noise levels of 0.5, 0.3, and 0.1 with a fixed number of iterations as the stopping rule, as shown in Figure 6. Two heat source functions are used: a symmetric sinusoidal curve-like function. The noise removal in the measured data can be accomplished by the so-called Sobolev gradient method [37]. The Sobolev gradient contribution $\nabla J^S \in H^1_0([0,1])$ can be obtained from the corresponding Euclidean contribution $\nabla J \in L^2([0,1])$ (see Equation (20) or Equation (21)) by solving the following elliptic boundary-value (in time) problem with homogeneous Neumann boundary conditions:…”
Section: Effect Of Noise Level (mentioning)
confidence: 99%
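The elliptic problem referred to at the end of this excerpt is not reproduced in the report. Based on the standard Sobolev-gradient construction in [37], and assuming the cut-off length-scale is denoted ℓ (an assumption; the symbol is lost in the excerpt), it presumably takes the form

```latex
\left(1 - \ell^{2}\,\frac{d^{2}}{dt^{2}}\right)\nabla J^{S} = \nabla J
\quad \text{in } (0,1), \qquad
\frac{d\,\nabla J^{S}}{dt}\bigg|_{t=0} = \frac{d\,\nabla J^{S}}{dt}\bigg|_{t=1} = 0 ,
```

so that the homogeneous Neumann conditions enter naturally in the weak form.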
“…where $n$ denotes the unit outward normal vector on $\partial[0,1]$ and a real-valued regularization parameter represents the cut-off length-scale below which the gradient information is essentially filtered out as a result of solving Equation (26) for $\nabla J^S$; see [37]. For spatially discretizing Equation (26), 1D piecewise-linear finite elements are used.…”
Section: Effect Of Noise Level (mentioning)
confidence: 99%
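This last excerpt mentions a 1D piecewise-linear finite-element discretization of Equation (26). Below is a minimal sketch of such a solve, assuming a uniform mesh, the Helmholtz-type form written above, and homogeneous Neumann conditions (which are natural in the weak form, so no rows of the system are modified). The function name sobolev_gradient_fem and the sample data are illustrative, not taken from the cited paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def sobolev_gradient_fem(grad_l2, ell, t0=0.0, t1=1.0):
    """Solve (1 - ell^2 d^2/dt^2) gS = gL2 on [t0, t1] with homogeneous
    Neumann conditions, using piecewise-linear (P1) finite elements on a
    uniform mesh whose nodes coincide with the samples of grad_l2.
    Weak form: (M + ell^2 K) gS = M gL2, with M the mass matrix and K the
    stiffness matrix.  Illustrative sketch; the cited implementation may differ."""
    n = grad_l2.size
    h = (t1 - t0) / (n - 1)

    # P1 (consistent) mass matrix and stiffness matrix on a uniform mesh
    main_m = np.full(n, 2.0 * h / 3.0); main_m[[0, -1]] = h / 3.0
    off_m = np.full(n - 1, h / 6.0)
    M = diags([off_m, main_m, off_m], [-1, 0, 1], format="csr")

    main_k = np.full(n, 2.0 / h); main_k[[0, -1]] = 1.0 / h
    off_k = np.full(n - 1, -1.0 / h)
    K = diags([off_k, main_k, off_k], [-1, 0, 1], format="csr")

    A = M + (ell ** 2) * K
    return spsolve(A, M @ grad_l2)

# usage: smooth a sampled L^2 gradient with cut-off length-scale 0.05
g = np.cos(3 * np.pi * np.linspace(0.0, 1.0, 101)) + 0.2 * np.random.randn(101)
gS = sobolev_gradient_fem(g, ell=0.05)
```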