2018 · Preprint · DOI: 10.1101/272005
Optimization and uncertainty analysis of ODE models using 2nd order adjoint sensitivity analysis

Abstract:
Motivation: Parameter estimation methods for ordinary differential equation (ODE) models of biological processes can exploit gradients and Hessians of objective functions to achieve convergence and computational efficiency. However, the computational complexity of established methods to evaluate the Hessian scales linearly with the number of state variables and quadratically with the number of parameters. This limits their application to low-dimensional problems.
Results: We introduce second order adjoint sensitivity analysis…

Cited by 4 publications (8 citation statements, published 2019–2022); references 40 publications.
“…To minimize the loss function L with respect to target data, we use the adjoint state of the IDE in Equation 1, cf. [10,31], or backpropagate directly through the IDE solver. The adjoint function is defined as a(t) := ∂L/∂y and its dynamics are determined by another system of IDEs, since it is obtained by differentiating the loss function evaluated on the output of the IDE solver.…”
Section: Learning Dynamics with Integro-Differential Equations (mentioning)
confidence: 99%
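For the plain ODE case treated by the preprint itself, the adjoint construction quoted above reduces to a well-known backward system (cf. Chen et al., 2018). A minimal sketch in LaTeX, under the assumption of a model dy/dt = f(y, θ, t) on [t₀, T] with a scalar loss L evaluated at the final state:

```latex
\frac{da}{dt} = -a(t)^{\top}\frac{\partial f}{\partial y},
\qquad a(T) = \frac{\partial L}{\partial y(T)},
\qquad \frac{dL}{d\theta} = -\int_{T}^{t_0} a(t)^{\top}\frac{\partial f}{\partial \theta}\,dt .
```

The integro-differential setting in the quote adds integral terms to these dynamics but keeps the same backward-in-time structure.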
“…In order to update the parameters of the neural networks of the NIDE, we need to consider the augmented system (cf. [10,31]). The augmented state a_aug(t) is obtained by considering the augmented IDE, where y_aug = [y(t) | θ] is obtained by concatenating the parameters of F and K to y(t).…”
Section: Appendix A: Integro-Differential Equations (mentioning)
confidence: 99%
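To make the augmented-state construction concrete, here is a minimal Python sketch of the ODE analogue: the state y, the adjoint a, and the running parameter gradient are stacked into one backward solve. The toy dynamics and the helper names (f, df_dy, df_dtheta, adjoint_gradient) are illustrative assumptions, not code from any cited paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(y, theta):
    """Toy linear dynamics dy/dt = -theta * y (illustrative assumption)."""
    return -theta * y

def df_dy(y, theta):      # Jacobian of f with respect to the state y
    return -theta * np.eye(len(y))

def df_dtheta(y, theta):  # Jacobian of f with respect to the parameter theta
    return -y.reshape(-1, 1)

def adjoint_gradient(y0, theta, t0, T, dLdyT):
    """Backward-integrate the augmented state [y, a, dL/dtheta] from T to t0."""
    # Forward solve once, only to obtain the final state y(T).
    fwd = solve_ivp(lambda t, y: f(y, theta), (t0, T), y0)
    yT = fwd.y[:, -1]
    n = len(y0)

    def aug_rhs(t, z):
        y, a = z[:n], z[n:2 * n]
        dy = f(y, theta)
        da = -df_dy(y, theta).T @ a        # adjoint dynamics da/dt = -(df/dy)^T a
        dg = -(df_dtheta(y, theta).T @ a)  # accumulates dL/dtheta along the way
        return np.concatenate([dy, da, dg])

    z_T = np.concatenate([yT, dLdyT, np.zeros(1)])
    bwd = solve_ivp(aug_rhs, (T, t0), z_T)  # integrate backward in time
    return bwd.y[2 * n:, -1]                # dL/dtheta accumulated at t0

# Usage: gradient of L = y(T), i.e. dL/dy(T) = [1], for exponential decay.
grad = adjoint_gradient(np.array([1.0]), 0.5, 0.0, 1.0, np.array([1.0]))
print(grad)  # ≈ d/dtheta exp(-theta*T) = -T*exp(-theta*T) ≈ -0.6065 at theta=0.5
```

The printed value can be checked against the closed form for this toy model, which is what makes the stacked backward solve easy to validate.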
“…In order to ensure high calculation accuracy, the step size of the ODE solver is in practice set to a small value, which means that directly using the gradient backpropagation algorithm to calculate the gradient of the loss function introduces a large computational cost. To solve this problem, the adjoint method was proposed in [9] to transform the gradient calculation into an ODE problem, which can be solved by the ODE solver at low computational cost. To handle the calculation error introduced by the adjoint method of [9], the adaptive checkpoint adjoint method was further proposed in [10], at the cost of a small amount of additional storage.…”
Section: A. Overview of Neural ODE and ODE-RNN (mentioning)
confidence: 99%
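For contrast with the adjoint approach, here is a minimal sketch of the direct backpropagation-through-the-solver baseline this quote describes: a fixed-step Euler solve that keeps every intermediate state in memory, then accumulates the gradient in a reverse sweep. Storage grows with the number of steps, which is exactly what the adjoint and checkpointing methods trade away. The decay model and the loss L = y(T) are toy assumptions:

```python
import numpy as np

def euler_backprop(y0, theta, T, steps):
    """Discrete adjoint of a fixed-step Euler solve of dy/dt = -theta*y."""
    h = T / steps
    ys = [y0]
    for _ in range(steps):                 # forward: y_{k+1} = y_k + h*f(y_k)
        ys.append(ys[-1] + h * (-theta * ys[-1]))
    a, grad = 1.0, 0.0                     # a = dL/dy_k, seeded with dL/dy_N = 1
    for k in reversed(range(steps)):       # reverse sweep over ALL stored states
        grad += a * (-h * ys[k])           # d y_{k+1} / d theta = -h * y_k
        a *= (1.0 - h * theta)             # d y_{k+1} / d y_k
    return grad

print(euler_backprop(1.0, 0.5, 1.0, 1000))  # ≈ -T*exp(-theta*T) ≈ -0.6065
```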
“…For ODEs, we can explicitly write a solver to simulate the dynamical system. Following Chen et al. (2018), if the ODE has too many parameters or the discretisation appears to affect the quality of the solution, we can learn gradients in a memory-efficient way through the adjoint method (Stapor et al., 2018). The same simulation applies to white-box and black-box methods; the difference is contained in the function f_θ.…”
Section: Simulation: Modified Euler (mentioning)
confidence: 99%
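The section title above refers to a modified Euler scheme. Reading that as Heun's predictor-corrector method (one common meaning of "modified Euler"; the midpoint rule is another), a minimal sketch:

```python
import numpy as np

def heun_step(f, t, y, h):
    """One modified-Euler step: an Euler predictor plus trapezoidal corrector."""
    k1 = f(t, y)                    # slope at the start of the interval
    k2 = f(t + h, y + h * k1)       # slope at the Euler-predicted endpoint
    return y + 0.5 * h * (k1 + k2)  # average the two slopes

def simulate(f, t0, y0, T, steps):
    """Fixed-step simulation of dy/dt = f(t, y) from t0 to T."""
    h, y, t = (T - t0) / steps, np.asarray(y0, float), t0
    for _ in range(steps):
        y, t = heun_step(f, t, y, h), t + h
    return y

# Usage: dy/dt = -y from y(0) = 1; exact y(1) = exp(-1) ≈ 0.3679.
print(simulate(lambda t, y: -y, 0.0, [1.0], 1.0, 100))
```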