2020
DOI: 10.48550/arxiv.2010.09386
Preprint
Learning Exponential Family Graphical Models with Latent Variables using Regularized Conditional Likelihood

Abstract: Fitting a graphical model to a collection of random variables given sample observations is a challenging task if the observed variables are influenced by latent variables, which can induce significant confounding statistical dependencies among the observed variables. We present a new convex relaxation framework based on regularized conditional likelihood for latent-variable graphical modeling in which the conditional distribution of the observed variables conditioned on the latent variables is given by an expo…
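The confounding the abstract describes can be illustrated numerically. In the following sketch (all variable names and numbers are illustrative, not from the paper), a single latent factor influences every observed variable: conditioned on the latent factor the observed variables are independent, yet the precision matrix of the observed variables alone has no zero off-diagonal entries, i.e. the marginal graph looks fully connected.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50_000, 4

# One latent factor z influencing every observed variable. Conditioned on z,
# the observed variables are independent (an empty conditional graph).
z = rng.normal(size=(n, 1))
x = 0.8 * z + rng.normal(size=(n, p))

# Marginally, however, every pair of observed variables is dependent: the
# estimated precision (inverse covariance) matrix of the observed block has
# clearly nonzero off-diagonal entries, the confounding the abstract refers to.
precision = np.linalg.inv(np.cov(x, rowvar=False))
print(np.round(precision, 2))
```

A sparse graphical model fit to the observed variables alone would therefore be badly misspecified, which is the motivation for frameworks that model the latent variables explicitly.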

Cited by 1 publication (1 citation statement)
References 17 publications
“…(3) Methods in the last group are based on structural models. For example, Yin and Li (2011), Danaher et al. (2014), and Taeb et al. (2020) assumed the raw counts or log-transformed gene expression data followed Gaussian(-like) distributions and adopted the Gaussian Graphical Model (GGM) (Friedman et al., 2008) to estimate the partial correlation between genes. GENIE3 (Huynh-Thu et al., 2010) decomposed the task into several regression subproblems and solved them with tree-based models.…”
Section: Introduction
confidence: 99%
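The quoted statement refers to estimating partial correlations between genes with a Gaussian graphical model, fitted via the graphical lasso of Friedman et al. (2008). A minimal sketch using scikit-learn's `GraphicalLasso` (the toy "expression" data and the penalty strength `alpha=0.2` are illustrative assumptions, not values from the cited papers):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n = 200

# Toy expression matrix: 200 samples, 5 genes; genes 0 and 1 are coupled,
# the remaining genes are independent noise.
g0 = rng.normal(size=n)
g1 = 0.8 * g0 + 0.5 * rng.normal(size=n)
X = np.column_stack([g0, g1,
                     rng.normal(size=n),
                     rng.normal(size=n),
                     rng.normal(size=n)])

# Graphical lasso: l1-penalized maximum-likelihood estimate of the precision
# matrix, which zeroes out weak conditional dependencies.
model = GraphicalLasso(alpha=0.2).fit(X)
P = model.precision_

# Partial correlation between genes i and j: -P_ij / sqrt(P_ii * P_jj).
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 2))
```

Only the coupled pair (genes 0 and 1) should retain a sizable partial correlation; the l1 penalty shrinks the entries between independent genes to (near) zero, which is how these methods recover a sparse gene network.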