2010
DOI: 10.1109/tmag.2010.2044770

Automatic Differentiation Applied for Optimization of Dynamical Systems

Abstract: Simulation is ubiquitous in many scientific areas. Applied to dynamical systems, usually by employing differential equations, it gives the time evolution of the system states. Solving such problems often requires numerical integration algorithms. Automatic Differentiation (AD) is introduced as a powerful technique to compute derivatives of functions given in the form of computer programs in a high-level programming language such as FORTRAN, C, or C++. This technique fits perfectly in combinati…
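As a brief illustration of the forward mode of AD mentioned in the abstract, the following is a minimal sketch in C++ using operator overloading on dual numbers. The struct and operator names are illustrative assumptions, not the tool or API used in the paper.

    // Minimal forward-mode AD sketch using dual numbers (illustrative only;
    // not the paper's actual implementation).
    #include <cmath>
    #include <iostream>

    struct Dual {
        double val;  // function value
        double dot;  // derivative value, propagated by the chain rule
    };

    Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.dot + b.dot}; }
    Dual operator*(Dual a, Dual b) { return {a.val * b.val, a.dot * b.val + a.val * b.dot}; }
    Dual sin(Dual a) { return {std::sin(a.val), std::cos(a.val) * a.dot}; }

    int main() {
        Dual x{2.0, 1.0};         // seed: dx/dx = 1
        Dual y = x * x + sin(x);  // y = x^2 + sin(x)
        std::cout << y.val << " " << y.dot << "\n";  // f(2) and f'(2) = 2*2 + cos(2)
    }

Every arithmetic operation carries its derivative along with its value, so the derivative of the whole program is obtained to machine precision, without symbolic manipulation or finite-difference truncation error.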

Cited by 20 publications (10 citation statements)
References 8 publications
“…Theorem 2: Let initial data $y_n$ and $\frac{dy_n}{dp}$ be given. The algorithmic derivative of a single step of the scheme (14), (15) with step size $h$ applied to (8) yields the same value $\frac{dy_{n+1}}{dp}$ as an application of the same integration step to the combined system (8) and (9), as long as $\frac{dh}{dp} = 0$ and as long as the derivatives of equation solves are recovered according to the implicit function theorem. In terms of automatic differentiation, it is sufficient if $h$ does not carry derivative values and equation solves are treated as elementary operations.…”
Section: Rosenbrock and Runge-Kutta Schemes With Adaptive Step Size
confidence: 99%
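The numbered equations are not reproduced on this page. As a sketch, assuming (8) is the parametrized ODE and (9) its forward sensitivity equation, the combined system referred to above would read:

$$ \dot{y} = f(y, p), \qquad \frac{d}{dt}\!\left(\frac{\partial y}{\partial p}\right) = \frac{\partial f}{\partial y}\,\frac{\partial y}{\partial p} + \frac{\partial f}{\partial p} $$

Integrating this pair with the same scheme and the same derivative-free step size $h$ then reproduces exactly what blackbox forward AD computes for $\frac{dy_{n+1}}{dp}$.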
“…In [62], fixed-step-size explicit Runge-Kutta methods are applied to the discretization of optimal control problems, and it is shown that the sensitivities obtained by black-box forward AD are consistent with the corresponding tangent linear model. [15] reports on an application of AD to explicit Runge-Kutta methods with adaptive step size control. Further research on AD of ODE integration schemes was conducted in an optimal control context with a focus on the reverse mode of AD [48,49], including adaptive step sizes [2] and reverse-mode specifics such as interpolation strategies for the forward solution [1].…”
Section: Introduction
confidence: 99%
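To make the black-box forward AD picture concrete, here is a minimal illustrative sketch (not taken from any of the cited papers): one explicit Euler integration propagated with dual numbers, with the step size kept as a plain double so that dh/dp = 0, as Theorem 2 above requires. The test problem is a hypothetical choice.

    // Illustrative sketch: forward-mode AD through a fixed-step explicit Euler
    // integrator, with the step size h held as a plain double (so dh/dp = 0).
    #include <iostream>

    struct Dual { double val, dot; };  // value and its derivative w.r.t. parameter p

    Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.dot + b.dot}; }
    Dual operator*(Dual a, Dual b) { return {a.val * b.val, a.dot * b.val + a.val * b.dot}; }
    Dual operator*(double s, Dual a) { return {s * a.val, s * a.dot}; }
    Dual operator-(Dual a) { return {-a.val, -a.dot}; }

    // Hypothetical test problem: y' = f(y, p) = -p * y, exact solution exp(-p*t)
    Dual f(Dual y, Dual p) { return -(p * y); }

    int main() {
        Dual p{0.5, 1.0};   // parameter, seeded with dp/dp = 1
        Dual y{1.0, 0.0};   // initial state; dy0/dp = 0
        double h = 0.1;     // inactive step size: carries no derivative value
        for (int n = 0; n < 10; ++n)
            y = y + h * f(y, p);  // each Euler step also advances dy/dp
        std::cout << "y(1) = " << y.val << ", dy/dp = " << y.dot << "\n";
    }

Because h is an ordinary double, differentiating the program is equivalent to integrating the state and its sensitivity with the same scheme, which is the consistency property shown in [62].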
“…In the second case, automatic differentiation is used [6]. In CADES, the analytical models are translated into software components that allow the computation of their outputs and their Jacobian with respect to their inputs.…”
Section: Modelling Approach
confidence: 99%
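As a hedged illustration of what such a generated component might expose, the interface below is a hypothetical sketch, not the actual CADES API:

    #include <vector>

    // Hypothetical interface for a generated model component: given inputs x,
    // it returns outputs y(x) and the Jacobian dy/dx (one row per output).
    struct ModelComponent {
        virtual std::vector<double> outputs(const std::vector<double>& x) const = 0;
        virtual std::vector<std::vector<double>> jacobian(const std::vector<double>& x) const = 0;
        virtual ~ModelComponent() = default;
    };

The point is that the optimizer consumes exact derivatives produced by AD through a uniform component interface, rather than finite differences.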
“…- using the Newton-Raphson (NR) algorithm, which gives the set of solved parameters at each iteration of the optimization. The derivative of such an algorithm is obtained using the implicit function theorem [6].…”
Section: Magnetic Model, R(t)i² Thermal Model (Joule Losses)
confidence: 99%
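The implicit-function-theorem rule alluded to here can be stated compactly. As a sketch, assuming the NR solve returns $x(p)$ satisfying a residual $r(x, p) = 0$:

$$ r\bigl(x(p),\, p\bigr) = 0 \quad\Longrightarrow\quad \frac{dx}{dp} = -\left(\frac{\partial r}{\partial x}\right)^{-1}\frac{\partial r}{\partial p} $$

The converged solve can thus be treated as an elementary operation by AD: only the residual's partial derivatives at the solution are needed, not the derivatives of every NR iteration.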
“…Over the past several decades, automatic differentiation has been explored across a number of contexts [12][13][14][15][16][17]. However, in recent years the growing interest in machine learning, and particularly gradient-based model training, has driven the development of modern automatic differentiation libraries.…”
Section: Introduction
confidence: 99%