2014
DOI: 10.1371/journal.pone.0091963
cuTauLeaping: A GPU-Powered Tau-Leaping Stochastic Simulator for Massive Parallel Analyses of Biological Systems

Abstract: Tau-leaping is a stochastic simulation algorithm that efficiently reconstructs the temporal evolution of biological systems, modeled according to the stochastic formulation of chemical kinetics. The analysis of dynamical properties of these systems in physiological and perturbed conditions usually requires the execution of a large number of simulations, leading to high computational costs. Since each simulation can be executed independently from the others, a massive parallelization of tau-leaping can bring to…
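To illustrate the method the abstract describes, a single fixed-τ tau-leaping step for a toy degradation reaction might look like the sketch below. This is a rough, self-contained illustration, not the paper's CUDA implementation; the function names and the toy system are assumptions.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm for a Poisson variate with mean lam (fine for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def tau_leap_step(x, tau, stoich, propensities, rng):
    # Fire each reaction j a Poisson(a_j(x) * tau) number of times,
    # then apply the corresponding stoichiometric change vectors.
    new_x = list(x)
    for a_j, v_j in zip(propensities(x), stoich):
        k_j = poisson(a_j * tau, rng)
        for i, change in enumerate(v_j):
            new_x[i] += k_j * change
    return [max(0, n) for n in new_x]  # clamp to avoid negative populations

# Toy system: irreversible degradation A -> (nothing) with constant c = 0.5.
rng = random.Random(42)
x = [100]                         # 100 molecules of species A
stoich = [(-1,)]                  # the single reaction removes one A
propensities = lambda s: [0.5 * s[0]]
for _ in range(20):
    x = tau_leap_step(x, 0.1, stoich, propensities, rng)
```

Because every trajectory is an independent sequence of such steps, many of them can be sampled in parallel, which is exactly what makes the algorithm a good fit for GPUs.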

Cited by 30 publications (29 citation statements) · References 78 publications
“…The models were generated considering the methodology used in [30, 50], which was modified in order to randomly sample the initial concentration of each species with a uniform distribution in the range [0,1), and the kinetic constant of each reaction with a logarithmic distribution in the range [10 −8 ,1).…”
Section: Results
Confidence: 99%
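The sampling scheme this excerpt describes can be sketched as follows; the function name and seeding are illustrative, not taken from the cited works.

```python
import random

def sample_model_parameters(n_species, n_reactions, rng):
    # Initial concentration of each species: uniform in [0, 1).
    x0 = [rng.random() for _ in range(n_species)]
    # Kinetic constant of each reaction: logarithmically distributed
    # in [1e-8, 1), i.e. uniform exponents in [-8, 0) mapped through 10**e.
    k = [10.0 ** rng.uniform(-8.0, 0.0) for _ in range(n_reactions)]
    return x0, k

x0, k = sample_model_parameters(20, 20, random.Random(0))
```

Sampling the exponent uniformly rather than the constant itself spreads the kinetic constants evenly across eight orders of magnitude, which a plain uniform draw over [10⁻⁸, 1) would not.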
“…Parallel computing paradigm may be used on multi-core CPUs, many-core processing units (such as, GPUs [77]), re-configurable hardware platforms (such as, FPGAs), or over distributed infrastructure (such as, cluster, Grid, or Cloud). While multi-core CPUs are suitable for general-purpose tasks, many-core processors (such as the Intel Xeon Phi [24] or GPU [85]) comprise a larger number of lower frequency cores and perform well on scalable applications (such as, DNA sequence analysis [71], biochemical simulation [53,76,81,123] or deep learning [129]).…”
Section: High Performance Computing and Big Data
Confidence: 99%
“…3.1 for some examples). In this context, GPUs [77] were already successfully employed to achieve a considerable reduction in the computational times required by the simulation of both deterministic [53,76,123] and stochastic models [81,150]. Besides accelerating single simulations of such models, these methods prove to be particularly useful when there is a need of running multiple independent simulations of the same model.…”
Section: High Performance Computing and Big Data
Confidence: 99%
“…Algorithm parallelization is usually realized by means of multi-threading [23], distributed computing on clusters [10], custom circuitry produced with Field Programmable Gate Array (FPGA) [14] or general-purpose Graphics Processing Units (GPU) computing [17,18,20]. These parallel technologies generally require a custom implementation of the algorithm, since most of the time CPU code cannot be directly ported on the parallel architecture; in addition, distributed architectures need the definition of an appropriate scheduler to manage the parallel execution of processes.…”
Section: Introduction
Confidence: 99%
“…The second test case is a family of synthetic stochastic models of increasing size (SynSM, in short), which are randomly generated according to the methodology proposed in [20]. Namely, SynSM are characterized by a number of species N and of reactions M ranging from 20 × 20 to 240 × 240; the values of the stochastic constants are randomly sampled with uniform distribution in (0, 1).…”
Section: Introduction
Confidence: 99%
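A minimal sketch of generating such a family of synthetic models, under the assumption that each model is simply a list of reactions with randomly chosen reactant and product species; the actual network topology rules of the methodology in [20] may differ.

```python
import random

def synthetic_model(n_species, n_reactions, rng):
    # Hypothetical generator: each reaction converts one random species into
    # another, with a stochastic constant sampled uniformly in (0, 1).
    reactions = []
    for _ in range(n_reactions):
        reactant = rng.randrange(n_species)
        product = rng.randrange(n_species)
        c = rng.random()
        while c == 0.0:  # enforce the open interval (0, 1)
            c = rng.random()
        reactions.append((reactant, product, c))
    return reactions

# Family of models with N species and M reactions, from 20x20 up to 240x240.
rng = random.Random(1)
models = {(n, n): synthetic_model(n, n, rng) for n in range(20, 241, 20)}
```

Scaling N and M together in fixed steps gives a family of benchmarks whose cost grows predictably with model size.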