2017
DOI: 10.1038/s41540-017-0023-2
Performance of objective functions and optimisation procedures for parameter estimation in system biology models

Abstract: Mathematical modelling of signalling pathways aids experimental investigation in system and synthetic biology. Ever increasing data availability prompts the development of large dynamic models with numerous parameters. In this paper, we investigate how the number of unknown parameters affects the convergence of three frequently used optimisation algorithms and four objective functions. We compare objective functions that use data-driven normalisation of the simulations with those that use scaling factors. The …

Cited by 65 publications (73 citation statements)
References 43 publications
“…Given that the overall computation time is thousands of CPU hours, this improvement is substantial. While previous studies had already shown a reduced convergence rate when calibrating models to relative data (Degasperi et al, 2017), we identified the large gradients with respect to the scalings as a possible explanation and established a flexible and easy way to circumvent them. The numerical stiffness that these gradients can induce in numerical optimization methods is the first conceptual explanation of the large improvements achieved by hierarchical methods (Weber et al, 2011; Loos et al, 2018).…”
Section: Discussion
confidence: 76%
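The hierarchical methods mentioned in this statement eliminate the scaling parameters analytically: for relative data of the form data ≈ s · sim, the inner least-squares problem in s has a closed-form solution, so the outer optimiser never sees gradients with respect to the scalings. A minimal NumPy sketch (the single-scaling setup is an illustrative assumption, not the cited studies' exact formulation):

```python
import numpy as np

def optimal_scaling(sim, data):
    # Inner problem of hierarchical optimisation: for a fixed simulation,
    # the least-squares-optimal scaling factor has the closed form
    #   s* = <data, sim> / <sim, sim>
    return np.dot(data, sim) / np.dot(sim, sim)

def hierarchical_objective(sim, data):
    # Outer objective after the scaling has been eliminated analytically;
    # only the dynamic parameters remain in the search space
    s = optimal_scaling(sim, data)
    return np.sum((data - s * sim) ** 2)
```

Because s is recomputed from the closed form at every evaluation, the potentially steep directions associated with the scalings are removed from the optimisation landscape.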
“…A priori it is not clear what influence scaling, offset and noise parameters have on optimizer performance. However, Degasperi et al (2017) observed in two examples that the use of scalings led to inferior optimizer behaviour compared to the normalization-based approach, which was also used by Fröhlich et al (2018). Thus, before estimating parameters using real measured data from CCLE and MCLP, we first used simulated data.…”
Section: Evaluation Of Standard And Hierarchical Optimization Using S…
confidence: 99%
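The two objective-function styles contrasted here can be sketched as follows. Data-driven normalisation rescales both measurement and simulation by their own maxima, so no extra parameter is estimated; the scaling-factor approach estimates the scale jointly with the kinetic parameters, enlarging the search space. The one-parameter exponential model is a hypothetical illustration, not a model from the cited studies:

```python
import numpy as np

def model(k, t):
    # hypothetical one-parameter kinetic model (illustration only)
    return np.exp(-k * t)

def normalised_objective(k, t, data):
    # data-driven normalisation: both trajectories are scaled to their
    # own maxima, so no scaling parameter enters the optimisation
    sim = model(k, t)
    return np.sum((data / data.max() - sim / sim.max()) ** 2)

def scaling_objective(params, t, data):
    # scaling-factor approach: the scale s is estimated jointly with k,
    # adding one dimension (per observable) to the search space
    k, s = params
    sim = model(k, t)
    return np.sum((data - s * sim) ** 2)
```

Both objectives vanish at the true parameters; the practical difference reported in the quoted studies lies in how the extra scaling dimensions affect optimiser convergence.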
“…Although it is possible to work with derivatives of kinetic equations, the increasing number of unknown kinetic parameters in the equations becomes a challenge for the modeling process. A set of experimental data is necessary to enable the indirect estimation of the unknown parameters.…”
Section: Methods
confidence: 99%
“…A set of experimental data is necessary to enable the indirect estimation of the unknown parameters. [25–28] The optimum parameter value is obtained when the NLLS objective function of this work is minimized, i.e. when the differences between experimental (X exp ) and simulated (X mod ) data are minimized, as given by Eqn (16): [29–34] […”
Section: Methods
confidence: 99%
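The NLLS idea described in this statement can be sketched briefly: the objective is the sum of squared differences between experimental and simulated data, and its minimiser is taken as the parameter estimate. The one-parameter decay model and the grid scan below are illustrative assumptions (the quoted Eqn (16) is not reproduced here), standing in for whatever model and optimiser a given study uses:

```python
import numpy as np

def model(k, t):
    # hypothetical one-parameter model: exponential decay x(t) = exp(-k*t)
    return np.exp(-k * t)

def nlls_objective(k, t, x_exp):
    # NLLS-style sum of squared residuals between experimental (X_exp)
    # and simulated (X_mod) data, as in the quoted text
    x_mod = model(k, t)
    return np.sum((x_exp - x_mod) ** 2)

# synthetic "experimental" data generated with k = 0.5 (illustration only)
t = np.linspace(0.0, 5.0, 20)
x_exp = model(0.5, t)

# minimise by a simple grid scan; a real study would use a gradient-based
# or global optimiser instead
grid = np.linspace(0.01, 2.0, 2000)
k_best = grid[np.argmin([nlls_objective(k, t, x_exp) for k in grid])]
```

With noise-free synthetic data the scan recovers the generating parameter to within the grid resolution; with real data the residuals remain nonzero and the estimate inherits the measurement uncertainty.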