2016
DOI: 10.1186/s12859-016-1039-0

Learning mixed graphical models with separate sparsity parameters and stability-based model selection

Abstract: Background: Mixed graphical models (MGMs) are graphical models learned over a combination of continuous and discrete variables. Mixed variable types are common in biomedical datasets. MGMs consist of a parameterized joint probability density, which implies a network structure over these heterogeneous variables. The network structure reveals direct associations between the variables, and the joint probability density allows one to ask arbitrary probabilistic questions of the data. This information can be used for …
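The abstract states that the parameterized joint density implies a network structure over the mixed variables. As a minimal sketch of that idea only (the parameter block names beta, rho, and phi and their shapes are illustrative assumptions, not the authors' code), edges can be read off wherever the corresponding edge parameters are nonzero:

```python
# Minimal sketch (not the authors' implementation): how an MGM's parameters
# imply a network structure over p continuous and q discrete variables.
import numpy as np

def mgm_adjacency(beta, rho, phi, tol=1e-6):
    """Build a (p+q) x (p+q) boolean adjacency matrix.

    beta : (p, p) continuous-continuous edge parameters (assumed layout)
    rho  : (p, q) continuous-discrete edge parameter norms
    phi  : (q, q) discrete-discrete edge parameter norms
    An edge is present wherever the corresponding parameter group is nonzero.
    """
    p, q = beta.shape[0], phi.shape[0]
    adj = np.zeros((p + q, p + q), dtype=bool)
    adj[:p, :p] = np.abs(beta) > tol   # continuous-continuous edges
    adj[:p, p:] = np.abs(rho) > tol    # continuous-discrete edges
    adj[p:, :p] = adj[:p, p:].T        # keep the matrix symmetric
    adj[p:, p:] = np.abs(phi) > tol    # discrete-discrete edges
    np.fill_diagonal(adj, False)       # no self-loops
    return adj
```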

Cited by 65 publications (80 citation statements) | References 22 publications
“…Using a stability-based procedure to determine these parameters could enable more accurate predictions in practice. However, these experiments were intended to show the impact of the parameter itself [23]. More detailed accuracy results are available: F1 scores for the best-performing parameter setting of each algorithm on 50-node networks can be found in Online Resource 3 (1000 samples) and Online Resource 4 (200 samples).…”
Section: Results
confidence: 99%
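The excerpt above reports F1 scores over recovered network structures per parameter setting. As a hedged sketch of that metric only (the edge-set representation is an assumption, not the cited authors' code), F1 over undirected edges can be computed as:

```python
# Hedged sketch: F1 score for network recovery, computed over undirected
# edge sets represented as frozensets of variable-name pairs (an assumption).
def edge_f1(true_edges, learned_edges):
    """F1 = 2PR / (P + R) over undirected edge sets."""
    tp = len(true_edges & learned_edges)
    precision = tp / len(learned_edges) if learned_edges else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy usage with hypothetical variable names:
true_net = {frozenset(e) for e in [("x1", "x2"), ("x2", "y1")]}
learned = {frozenset(e) for e in [("x1", "x2"), ("x1", "y1")]}
print(edge_f1(true_net, learned))  # 0.5: one of two true edges recovered
```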
“…Learning this model directly over high-dimensional datasets is computationally infeasible because it requires computing the partition function. To avoid this, a proximal gradient method is used to learn a penalized negative log-pseudolikelihood form of the model. The negative log-pseudolikelihood is given in Equation 3, and the penalized form is presented in Equation 4 and described in [23]. In both, Θ refers to all parameters of the model collectively.…”
Section: Methods
confidence: 99%
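The excerpt above describes minimizing a penalized negative log-pseudolikelihood with a proximal gradient method. The sketch below shows the general shape of such an iteration only: a gradient step on a smooth loss (a toy quadratic stands in for the pseudolikelihood term) followed by the proximal operator of an l1 penalty, i.e. soft-thresholding. Note the cited paper uses separate sparsity parameters per edge type with group penalties on the discrete parameters; a single l1 term and the chosen step size here are simplifying assumptions.

```python
# Hedged sketch of a proximal gradient (ISTA-style) iteration, not the
# cited authors' solver. The loss is a placeholder for the negative
# log-pseudolikelihood; the l1 prox is soft-thresholding.
import numpy as np

def soft_threshold(theta, t):
    """Proximal operator of t * ||theta||_1."""
    return np.sign(theta) * np.maximum(np.abs(theta) - t, 0.0)

def proximal_gradient(grad_loss, theta0, lam, step=0.1, n_iter=500):
    """Minimize loss(theta) + lam * ||theta||_1 via proximal gradient steps."""
    theta = theta0.copy()
    for _ in range(n_iter):
        theta = soft_threshold(theta - step * grad_loss(theta), step * lam)
    return theta

# Toy usage: a least-squares loss stands in for the smooth likelihood term.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
grad = lambda th: A.T @ (A @ th - b) / len(b)
theta_hat = proximal_gradient(grad, np.zeros(5), lam=0.1)
```

The appeal of this scheme, and the likely reason the passage names it, is that the nonsmooth penalty never needs to be differentiated: each iteration handles it exactly through a cheap closed-form proximal step, which also drives small parameters exactly to zero and thereby produces the sparse network structure.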