2016
DOI: 10.1371/journal.pcbi.1005157
Fused Regression for Multi-source Gene Regulatory Network Inference

Abstract: Understanding gene regulatory networks is critical to understanding cellular differentiation and response to external stimuli. Methods for global network inference have been developed and applied to a variety of species. Most approaches consider the problem of network inference independently in each species, despite evidence that gene regulation can be conserved even in distantly related species. Further, network inference is often confined to single data-types (single platforms) and single cell types. We intr…
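The "fused regression" of the title refers to coupling regression coefficients across data sources so that conserved regulation is learned jointly. As a rough, hypothetical sketch only (the paper's exact objective and solver are not reproduced here), the following assumes two sources with matched regulators, an L1 sparsity penalty, and a squared fusion penalty (gamma/2)·||b1 − b2||², solved by proximal gradient descent (ISTA); all function names and parameter values are illustrative.

```python
# Illustrative sketch of fused regression across two "sources" (e.g. species).
# NOT the paper's exact formulation: we assume the smooth objective
#   0.5*||X1 b1 - y1||^2 + 0.5*||X2 b2 - y2||^2 + (gamma/2)*||b1 - b2||^2
# plus an L1 penalty lam*(||b1||_1 + ||b2||_1), minimized by ISTA.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fused_regression(X1, y1, X2, y2, lam=0.1, gamma=1.0, n_iter=500):
    p = X1.shape[1]                       # assumes matched regulators across sources
    b1, b2 = np.zeros(p), np.zeros(p)
    # Step size from an upper bound on the Lipschitz constant of the smooth part.
    L = max(np.linalg.norm(X1, 2)**2, np.linalg.norm(X2, 2)**2) + 2 * gamma
    for _ in range(n_iter):
        g1 = X1.T @ (X1 @ b1 - y1) + gamma * (b1 - b2)   # gradient w.r.t. b1
        g2 = X2.T @ (X2 @ b2 - y2) + gamma * (b2 - b1)   # gradient w.r.t. b2
        b1 = soft_threshold(b1 - g1 / L, lam / L)
        b2 = soft_threshold(b2 - g2 / L, lam / L)
    return b1, b2

# Toy usage: two sources sharing the same underlying sparse regulatory weights.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 20)), rng.normal(size=(40, 20))
b_true = np.zeros(20)
b_true[:3] = [2.0, -1.5, 1.0]
y1 = X1 @ b_true + 0.1 * rng.normal(size=50)
y2 = X2 @ b_true + 0.1 * rng.normal(size=40)
b1, b2 = fused_regression(X1, y1, X2, y2)
```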

Cited by 41 publications (34 citation statements)
References 54 publications
Citation types: 2 supporting, 32 mentioning, 0 contrasting
“…Furthermore, normalizing batches to equal transcript depth risks suppressing differences that reflect true biological variability. An alternative approach is to treat the cells from each environmental condition as separate tasks, and then jointly learn a network using a multi-task learning (MTL) framework (Lam et al., 2016). We find that our multi-task network inference procedure, which we named Adaptive Multiple Sparse Regression (AMuSR), improves the quality of the network inference and increases the size of the network recovered (Figure 5D).…”
Section: Multi-task Learning Improves Network Inference and Enables R…
mentioning
confidence: 99%
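The quoted passage treats each condition as a task and learns the networks jointly. Below is a minimal sketch of the shared-plus-specific decomposition common to such multi-task frameworks, assuming per-task weights W[:, k] = B[:, k] + S[:, k] with a group (L2,1) penalty pushing regulators to be shared across tasks and an L1 penalty on task-specific deviations; AMuSR's actual penalties, scaling, and solver differ, so this is illustrative only.

```python
# Schematic multi-task network inference via a shared-plus-sparse split:
# B holds regulator weights encouraged to be active across all tasks (L2,1
# row penalty), S holds sparse task-specific deviations (L1 penalty).
# Solved here by alternating proximal gradient steps; not AMuSR's solver.
import numpy as np

def group_soft_threshold(B, t):
    """Row-wise proximal operator of the L2,1 norm."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return B * scale

def multitask_inference(Xs, ys, lam_b=0.1, lam_s=0.05, n_iter=300):
    K, p = len(Xs), Xs[0].shape[1]
    B = np.zeros((p, K))   # shared component, column k for task k
    S = np.zeros((p, K))   # task-specific (sparse) component
    L = max(np.linalg.norm(X, 2)**2 for X in Xs)   # step-size bound
    for _ in range(n_iter):
        G = np.column_stack([Xs[k].T @ (Xs[k] @ (B[:, k] + S[:, k]) - ys[k])
                             for k in range(K)])
        B = group_soft_threshold(B - G / L, lam_b / L)
        G = np.column_stack([Xs[k].T @ (Xs[k] @ (B[:, k] + S[:, k]) - ys[k])
                             for k in range(K)])
        S = np.sign(S - G / L) * np.maximum(np.abs(S - G / L) - lam_s / L, 0.0)
    return B + S   # combined regulator weights, one column per task
```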
“…where the first term is the basic NCA model [10] (A ∈ R^{m×l} is the TF activity for m TFs in l samples), the second and third terms are standard regularization terms, and the last term, involving the ℓ0 norm, induces sparsity in the given prior network. Therefore, solving (6) would yield a refined GRN that retains only key edges from the prior network. The details of the sparse NCA-based network remodelling model are illustrated in Fig.…”
Section: Netrex-cf Model
mentioning
confidence: 99%
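The model quoted above combines a basic NCA fit with regularization and an ℓ0 term that prunes the prior network. The following schematic, with hypothetical names and a deliberately simplified objective, alternates ridge least squares for the TF activities A with a support-restricted, hard-thresholded update of the network W; NetREX-CF's full objective contains additional terms not shown here.

```python
# Schematic NCA-style decomposition with l0-induced sparsity: expression
# E (genes x samples) ~ W @ A, where W (genes x TFs) is confined to the
# prior network's support and hard-thresholded so only key edges survive.
import numpy as np

def hard_threshold(W, k):
    """Proximal operator of an l0 constraint: keep the k largest-magnitude entries."""
    flat = np.abs(W).ravel()
    if k < flat.size:
        cutoff = np.partition(flat, -k)[-k]
        W = np.where(np.abs(W) >= cutoff, W, 0.0)
    return W

def sparse_nca(E, prior_mask, k_edges, n_iter=100, lam=1e-2):
    n, l = E.shape
    m = prior_mask.shape[1]
    W = prior_mask * np.random.default_rng(0).normal(size=(n, m))
    for _ in range(n_iter):
        # TF activities A (m x l): ridge least squares given the current W.
        A = np.linalg.solve(W.T @ W + lam * np.eye(m), W.T @ E)
        # Network W: least squares given A, restricted to the prior's support,
        # then hard-thresholded to retain only the strongest k_edges edges.
        W = np.linalg.solve(A @ A.T + lam * np.eye(m), A @ E.T).T
        W = hard_threshold(W * prior_mask, k_edges)
    return W, A
```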
“…The first inequality comes from the fact that ∇H is Lipschitz continuous on bounded subsets of R^n × R^m, as assumed in Assumption 1 (6). From the optimality condition for (27), we have…”
Section: B2 Convergence Analysis
mentioning
confidence: 99%
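The quoted proof fragment appeals to two standard ingredients of such convergence analyses. Stated generically (the symbols below are placeholders, not the cited paper's exact notation), they are the descent lemma for an L-Lipschitz gradient and the first-order optimality condition of a proximal subproblem:

```latex
% Descent lemma: if \nabla H is L-Lipschitz on a bounded convex set
% containing x and y, then
\[
  H(y) \le H(x) + \langle \nabla H(x),\, y - x \rangle + \frac{L}{2}\,\|y - x\|^2 .
\]
% A proximal subproblem of the form
\[
  x^{k+1} = \arg\min_x \; \langle \nabla H(x^k),\, x - x^k \rangle
            + \frac{1}{2\eta}\,\|x - x^k\|^2 + g(x)
\]
% then yields the first-order optimality condition
\[
  0 \in \nabla H(x^k) + \frac{1}{\eta}\,(x^{k+1} - x^k) + \partial g(x^{k+1}),
\]
% which is the type of condition being invoked for the paper's subproblem (27).
```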
“…In these methods [17,21], priors or constraints on network structure (derived from multiple sources like known interactions, ATAC-seq, DHS, or ChIP-seq experiments [22–24]) are used to influence the penalty on adding model components, where edges in the prior are effectively penalized less. Here we describe a method that builds on that work (and similar work in other fields), but in addition we let model inference processes (each carried out using a separate data-set) influence each other's model penalties, so that edges that agree across inference tasks are more likely to be uncovered [25–31]. Several previous works on this front focused on enforcing similarity across models by penalizing differences in the strength and direction of regulatory interactions using a fusion penalty [25,27,28].…”
mentioning
confidence: 99%
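The prior-based penalty described in this passage can be pictured as a weighted lasso in which prior-supported edges are cheaper to add. The sketch below assumes a single multiplicative discount on the L1 weight of prior edges; the weighting rule, names, and parameters are illustrative, not any cited method's exact scheme.

```python
# Sketch of a prior-weighted penalty: edges supported by prior evidence
# (e.g. derived from ATAC-seq, DHS, or ChIP-seq) receive a smaller L1
# penalty, so they are effectively penalized less, as described above.
import numpy as np

def prior_weighted_lasso(X, y, prior_edges, lam=0.1, prior_discount=0.25,
                         n_iter=500):
    p = X.shape[1]
    weights = np.full(p, lam)
    weights[prior_edges] *= prior_discount   # cheaper to include prior edges
    b = np.zeros(p)
    L = np.linalg.norm(X, 2)**2              # Lipschitz constant of the loss
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)                # gradient of 0.5*||Xb - y||^2
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - weights / L, 0.0)
    return b
```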