2021
DOI: 10.1007/978-3-030-65351-4_29

Deep Reinforcement Learning for Control of Probabilistic Boolean Networks

Abstract: Probabilistic Boolean Networks (PBNs) were introduced as a computational model for studying gene interactions in Gene Regulatory Networks (GRNs). Controllability of PBNs, and hence GRNs, is the process of making strategic interventions to a network in order to drive it from a particular state towards some other potentially more desirable state. This is of significant importance to systems biology as successful control could be used to obtain potential gene treatments by making therapeutic interventions. Recent…
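To make the PBN model concrete, the following is a minimal sketch of one PBN step plus a control intervention. The 3-node network, its Boolean functions, and their selection probabilities are invented for illustration; they are not taken from the paper.

```python
import random

# Hypothetical 3-node PBN: each node has candidate Boolean functions,
# one of which is sampled (with the given probability) at every step.
FUNCS = {
    0: [(lambda s: s[1] and s[2], 0.7), (lambda s: s[1], 0.3)],
    1: [(lambda s: not s[0], 1.0)],
    2: [(lambda s: s[0] or s[1], 0.6), (lambda s: not s[2], 0.4)],
}

def pbn_step(state, rng):
    """Advance the PBN one step: each node samples one of its functions."""
    next_state = []
    for i in range(len(state)):
        funcs, weights = zip(*FUNCS[i])
        f = rng.choices(funcs, weights=weights)[0]
        next_state.append(int(f(state)))
    return tuple(next_state)

def intervene(state, node):
    """A control intervention in the sense above: flip one node's value."""
    s = list(state)
    s[node] = 1 - s[node]
    return tuple(s)

rng = random.Random(0)
s = (1, 0, 1)
s = intervene(s, 1)   # perturb node 1 before the network evolves
s = pbn_step(s, rng)
```

A controller repeatedly chooses which node (if any) to flip between steps, aiming to steer the stochastic dynamics towards a desirable state.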

Cited by 10 publications (19 citation statements)
References 43 publications
“…It achieves 100% control if allowed up to 10 perturbations. More details on this experiment are found in [19].…”
Section: Experiments and Results
confidence: 99%
“…Existing work in control systems engineering typically restricts perturbations to a subset of nodes, typically the control nodes of a network [1], [16] or nodes that translate biologically, e.g., pirin and WNT5A driven induction of an invasive phenotype in melanoma cells [11], [17], [18]. The general case where perturbations are considered on the full set of nodes is less studied, with the exception of [19], even though it is relevant in contexts where control nodes are not available, or computationally intractable to obtain. Further, motivated by the biological properties found in various target states, different approaches perturb individual nodes’ states in a PBN in order to either drive it to some attractor within a finite number of steps ( horizon ), or change the network’s long-run behaviour by affecting its steady-state distribution (by increasing the mass probability of the target states).…”
Section: Introduction
confidence: 99%
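The quote above distinguishes finite-horizon control (reach an attractor) from shifting the network's long-run behaviour, i.e. increasing the steady-state probability mass on target states. A PBN induces a Markov chain over its 2^n states, so that mass can be read off the stationary distribution. The sketch below uses an invented 2-node chain (the matrix entries are illustrative, not from any cited network):

```python
import numpy as np

# Hypothetical 2-node PBN written directly as a Markov chain over its
# four states (00, 01, 10, 11); row i holds transition probabilities from i.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.1, 0.9, 0.0],
    [0.2, 0.0, 0.0, 0.8],
    [0.0, 0.0, 0.3, 0.7],
])

def steady_state(P, iters=10_000):
    """Power iteration for the stationary distribution of the chain."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

pi = steady_state(P)
mass_on_target = pi[3]   # long-run probability of state 11
```

A steady-state-oriented controller would alter the chain (by perturbing node dynamics) so that `mass_on_target` grows.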
“…The effectiveness of optimal control methods proposed in the literature was validated through the use of biological networks such as the 7-gene WNT5A network [29], [33], [46], [53], the 13-gene (9 state, 4 control) ARA OPERON network [34], and the 8-gene artificial network [55], among others. Neither approximation nor analytical PBCN optimal control approaches have been validated using a large biological network of 40 genes (37 states, three control), to the best of the authors' knowledge.…”
Section: B. Discussion
confidence: 99%
“…The optimal control problem is formulated such that the expression of gene x_3 is deregulated at the end of the treatment horizon. This objective can be translated to finding the control input that minimizes the cost (33) in the finite-horizon (t_f = 2) case.…”
Section: Artificial 3-Gene Network
confidence: 99%
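Finite-horizon problems like the one quoted are typically solved by backward dynamic programming over the induced Markov chain. This is a generic sketch with invented transition matrices, costs, and a horizon of 2, standing in for the cited cost (33) rather than reproducing it:

```python
import numpy as np

# Illustrative finite-horizon control of a small Markov chain (not the
# paper's 3-gene model): P[u] is the transition matrix under input u,
# step_cost charges for applying control, and the terminal cost
# penalises undesirable end states.
n_states, inputs = 4, [0, 1]
rng = np.random.default_rng(1)
P = {u: rng.dirichlet(np.ones(n_states), size=n_states) for u in inputs}

def step_cost(x, u):
    return 1.0 * u                         # pay only when intervening

terminal = np.array([0.0, 0.0, 5.0, 5.0])  # penalise "bad" states 2 and 3

def backward_dp(t_f=2):
    """Bellman recursion: J_t(x) = min_u [ c(x,u) + E[J_{t+1}(x')] ]."""
    J = terminal.copy()
    policy = []
    for _ in range(t_f):
        Q = np.stack([[step_cost(x, u) + P[u][x] @ J
                       for x in range(n_states)] for u in inputs])
        policy.append(Q.argmin(axis=0))   # best input per state at this stage
        J = Q.min(axis=0)
    policy.reverse()
    return J, policy

J0, policy = backward_dp()
```

`policy[t][x]` gives the cost-minimizing input at stage t in state x; `J0` is the optimal expected cost-to-go from each initial state.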
“…Imani et al used RL with Gaussian processes to achieve near-optimal infinite-horizon control of GRNs with uncertainty in both the interventions and measurements (Imani and Braga-Neto, 2017). Papagiannis et al introduced a novel learning approach to GRN control using a double deep Q network (double DQN) with prioritized experience replay and demonstrated successful results for larger GRNs than previous approaches (Papagiannis and Moschoyiannis, 2019a, 2019b). Although these applications of RL for reaching GRNs’ desirable attractors are related to our goal of switching attractors in continuous nonlinear dynamical systems, they are limited to random boolean networks, which have discrete state and action spaces.…”
Section: Introduction
confidence: 99%
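The double DQN mentioned in the quote decouples action selection from action evaluation: the online network picks the greedy next action, while the target network scores it, reducing the overestimation bias of vanilla Q-learning targets. A minimal NumPy sketch of that target computation (the Q-values, rewards, and discount are made up; prioritized replay is orthogonal and omitted):

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """q_*_next: (batch, n_actions) Q-values for the next states.

    Online net selects the action; target net evaluates it.
    """
    best_actions = q_online_next.argmax(axis=1)
    next_values = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * next_values * (1.0 - dones)

# Toy batch of two transitions; the second is terminal (done = 1).
q_online = np.array([[1.0, 2.0], [0.5, 0.1]])
q_target = np.array([[0.2, 0.4], [0.3, 0.9]])
targets = double_dqn_targets(q_online, q_target,
                             rewards=np.array([1.0, 0.0]),
                             dones=np.array([0.0, 1.0]))
# targets → [1 + 0.99 * 0.4, 0.0] = [1.396, 0.0]
```

In a full agent these targets would form the regression labels for the online network's Q-values on the sampled (prioritized) replay batch.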