2022
DOI: 10.3389/fnetp.2022.826345
RateML: A Code Generation Tool for Brain Network Models

Abstract: Whole brain network models are now an established tool in scientific and clinical research; however, their use in a larger workflow still adds significant informatics complexity. We propose a tool, RateML, that enables users to generate such models from a succinct declarative description, in which the mathematics of the model are described without specifying how their simulation should be implemented. RateML builds on NeuroML’s Low Entropy Model Specification (LEMS), an XML-based language for specifying models …
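The declarative idea summarized in the abstract can be illustrated with a toy sketch: the model is declared as data (state variables, parameters, derivative expressions) and the numerical implementation is generated from that declaration. The spec layout and generator below are hypothetical Python stand-ins used only to show the code-generation idea; they are not RateML's actual LEMS/XML schema or API.

# Toy sketch of "declarative model description -> generated integrator".
# MODEL_SPEC and generate_dfun() are hypothetical illustrations, not RateML's format.
import numpy as np

MODEL_SPEC = {
    "state": ["r"],                                    # state variables
    "parameters": {"tau": 10.0},                       # model parameters
    "derivatives": {"r": "(-r + np.tanh(c)) / tau"},   # dr/dt as an expression
}

def generate_dfun(spec):
    """Emit and compile a derivative function from the declarative spec."""
    lines = ["def dfun(state, c, p):"]
    lines += [f"    {s} = state[{i}]" for i, s in enumerate(spec["state"])]
    lines += [f"    {k} = p['{k}']" for k in spec["parameters"]]
    rhs = ", ".join(spec["derivatives"][s] for s in spec["state"])
    lines.append(f"    return np.array([{rhs}])")
    ns = {"np": np}
    exec("\n".join(lines), ns)                         # the "code generation" step
    return ns["dfun"]

dfun = generate_dfun(MODEL_SPEC)
print(dfun(np.array([0.1]), c=0.5, p=MODEL_SPEC["parameters"]))

The real tool performs this step ahead of time, emitting simulation source code (for example CUDA, as in the citation statements below) rather than compiling expressions at run time.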

Cited by 6 publications (7 citation statements) · References 29 publications
“…In this use case, we utilize an Euler-based solver. RateML (van der Vlag et al., 2022), the model generator of TVB, enables us to create the desired TVB model, written in CUDA for the GPU, and a driver to simulate the model, from a high-level model XML file.…”
Section: Results (mentioning)
confidence: 99%
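For readers unfamiliar with the scheme this statement refers to, the following NumPy sketch shows the kind of forward-Euler network update that a generated kernel would perform, with the GPU version parallelizing such updates over many parameter sets. The weights, tanh coupling, gain and time constant are illustrative assumptions, not the quoted study's configuration.

# Forward-Euler update for a toy coupled rate model; all constants are assumed.
import numpy as np

rng = np.random.default_rng(42)
n_nodes, dt, n_steps, tau, g = 8, 0.1, 1000, 10.0, 0.5
weights = rng.random((n_nodes, n_nodes))     # toy structural connectivity
r = rng.random(n_nodes)                      # per-region rate state

trace = np.empty((n_steps, n_nodes))
for step in range(n_steps):
    coupling = g * weights @ np.tanh(r)      # network input to each region
    r = r + dt * (-r + coupling) / tau       # forward-Euler step
    trace[step] = r
print(trace[-1])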
“…We used this optimizer to find the best parameter setting for a TVB model such that the match between simulated functional and structural connectivity is optimal. Results from performance testing for the RateML (van der Vlag et al., 2022) models show that for a double-state model such as the , on a GPU with 40 GB of memory, up to ≈62,464 (61 times more parameters) can be simulated in a single generation, taking approximately the same amount of wall time due to the architecture of the GPU. This would reduce the time it takes for each generation and increase the range and resolution of the to-be-optimized processes even further, opening up possibilities for experiments requiring greater computational power.…”
Section: Discussion and Future Work (mentioning)
confidence: 99%
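The sweep size quoted above is ultimately bounded by how much per-instance state fits in GPU memory. The sketch below shows that estimation pattern only; every number in it (region count, history length, reserved fraction, data type) is an assumed placeholder, not the configuration behind the ≈62,464 figure.

# Back-of-the-envelope GPU sweep sizing; all numbers are assumed placeholders,
# not the actual RateML/TVB memory layout.
n_regions = 68          # connectome regions (assumed)
n_states = 2            # "double state" model
history_len = 4000      # delay-history buffer length in steps (assumed)
bytes_per_value = 4     # float32

per_instance = n_regions * n_states * history_len * bytes_per_value
gpu_bytes = 40e9        # 40 GB card
usable = 0.8            # keep 20% headroom (assumed)

max_parameter_sets = int(gpu_bytes * usable // per_instance)
print(f"{per_instance / 1e6:.2f} MB per instance -> "
      f"~{max_parameter_sets} concurrent parameter sets")

Under these assumed figures roughly 1.5 × 10^4 instances fit; the quoted ≈62,464 suggests a leaner per-instance footprint in the actual implementation.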
“…Python / pygpc [51]: model-independent sensitivity and uncertainty analysis toolbox; Python / DDE-BIFTOOL [43]: toolbox for numerical parameter continuation and bifurcation analysis of delayed differential equation systems. This makes PyRates similar to other code generation tools such as Brian [19], ANNarchy [50], RateML [48], NESTML [38], or NeuroML [28]. All of these tools generate code from user-defined model equations and are designed for numerical integration of neurodynamic models.…”
Section: Python (mentioning)
confidence: 99%
“…Here, we report that our GPU implementation of the TVB-AdEx model substantially ameliorates the original CPU instantiation by accelerating model performance and facilitating post data analysis. The TVB-HPC framework is not another platform; it represents an expanded framework built upon RateML, TVB's [21] model generator [22]. RateML is performant, modular, reusable, and outperforms solutions such as neurolib [11], FastTVB [23], and Pyrates [24] in terms of magnitude of explorable parameter space and concurrent TVB instances.…”
Section: Introduction (mentioning)
confidence: 99%