2017
DOI: 10.1214/17-aoas1076

Bayesian inference for multiple Gaussian graphical models with application to metabolic association networks

Abstract: We investigate the effect of cadmium (a toxic environmental pollutant) on the correlation structure of a number of urinary metabolites using Gaussian graphical models (GGMs). The inferred metabolic associations can provide important information on the physiological state of a metabolic system and insights on complex metabolic relationships. Using the fitted GGMs, we construct differential networks, which highlight significant changes in metabolite interactions under different experimental conditions. The analy…
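The notion of a differential network mentioned in the abstract can be illustrated with a minimal sketch, which is not the authors' procedure: given partial correlation matrices estimated under two experimental conditions, an edge is placed in the differential network when its presence changes between the two fitted graphs. The function name, the thresholding rule and the toy inputs below are hypothetical.

```python
import numpy as np

def differential_network(pcor_a, pcor_b, threshold=0.1):
    """Boolean adjacency matrix of edges whose presence differs between conditions.

    pcor_a, pcor_b : (p, p) partial correlation matrices (hypothetical inputs)
    threshold      : illustrative cut-off for declaring an edge present
    """
    adj_a = np.abs(pcor_a) > threshold
    adj_b = np.abs(pcor_b) > threshold
    np.fill_diagonal(adj_a, False)   # ignore self-loops
    np.fill_diagonal(adj_b, False)
    return adj_a ^ adj_b             # edges present in one graph but not the other

# Toy usage with random symmetric matrices standing in for estimated partial correlations
rng = np.random.default_rng(0)
p = 5
A = rng.normal(size=(p, p)); A = (A + A.T) / 2
B = rng.normal(size=(p, p)); B = (B + B.T) / 2
print(differential_network(A, B))
```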

Cited by 32 publications (31 citation statements); References: 45 publications.
“…Our experiments also indicate that larger sample sizes might be required for larger networks. While we have recommended resampling based on the importance weights when the posterior mean has deviated significantly from the initial estimate, it is also possible to consider other approaches, such as generating a new set of samples from the likelihood at the current estimate of the posterior mean or introducing some MCMC steps to restore heterogeneity (Tan et al, 2017). Here L denotes the p × p elimination matrix (Magnus and Neudecker, 1980), which has the following properties: (i) L vec(A) = vech(A) for any square matrix A and (ii) L^T vech(A) = vec(A) if A is a lower triangular matrix of order p.…”
Section: Results (mentioning)
confidence: 99%
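The elimination matrix properties quoted above can be checked numerically. The sketch below assumes the standard column-major (Fortran) vec ordering, constructs the elimination matrix of Magnus and Neudecker (1980), which has dimension p(p+1)/2 × p², and verifies properties (i) and (ii); the helper names are illustrative only.

```python
import numpy as np

def elimination_matrix(p):
    """Elimination matrix L_p of shape (p*(p+1)//2, p*p) with L_p @ vec(A) = vech(A)."""
    rows = p * (p + 1) // 2
    L = np.zeros((rows, p * p))
    r = 0
    for j in range(p):            # columns of A
        for i in range(j, p):     # lower-triangular entries, including the diagonal
            L[r, j * p + i] = 1.0  # vec index of A[i, j] in column-major order
            r += 1
    return L

def vec(A):
    return A.flatten(order="F")                            # stack columns

def vech(A):
    p = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(p)])    # lower triangle, column by column

p = 4
L = elimination_matrix(p)
A = np.random.default_rng(1).normal(size=(p, p))
T = np.tril(A)                                             # a lower triangular matrix

assert np.allclose(L @ vec(A), vech(A))                    # (i)  L vec(A) = vech(A)
assert np.allclose(L.T @ vech(T), vec(T))                  # (ii) L^T vech(T) = vec(T), T lower triangular
```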
“…In the context of multiple undirected graphs, Tan et al (2017) consider a model based on a multiplicative prior on graph structures (Chung-Lu random graph) that links the probability of edge inclusion through logistic regression. Williams et al (2019) propose a model for multiple graphs aimed at the detection of network differences.…”
Section: Discussion Of Alternative Approaches (mentioning)
confidence: 99%
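The idea of linking edge-inclusion probabilities through logistic regression, as in the multiplicative (Chung-Lu-type) prior described above, can be sketched as follows. This is a toy illustration rather than Tan et al's exact prior: the node propensities alpha_i and the group-level shift are hypothetical parameters, and the log-odds of an edge between nodes i and j is taken to be alpha_i + alpha_j + beta_group, so the same node effects induce related graphs across sample groups.

```python
import numpy as np

def edge_inclusion_prob(node_effects, group_effect=0.0):
    """Chung-Lu-style edge probabilities through a logistic link:
    logit P(edge i~j) = alpha_i + alpha_j + beta_group.

    node_effects : length-p vector of node propensities alpha_i (hypothetical parameters)
    group_effect : scalar shift beta_group shared by all edges in one sample group
    """
    alpha = np.asarray(node_effects, dtype=float)
    logits = alpha[:, None] + alpha[None, :] + group_effect
    prob = 1.0 / (1.0 + np.exp(-logits))
    np.fill_diagonal(prob, 0.0)    # no self-loops
    return prob

# Toy usage: two groups share node propensities but differ by a group-level shift
alpha = np.array([-1.0, 0.5, 0.0, 1.5])
print(edge_inclusion_prob(alpha, group_effect=0.0))
print(edge_inclusion_prob(alpha, group_effect=0.8))
```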
“…Usually these algorithms do not scale as well as optimization approaches based on penalized likelihood; the maximum graph size that can be analyzed depends on many factors, including the type of graph, the statistical model and the specific dataset at hand. In the context of multiple-graph models, alternative computational strategies have been developed; relevant instances include the EM algorithm proposed by Li et al (2020), which yields a point estimate of the graphs and can scale better to larger dimensions, and a sequential Monte Carlo (SMC) algorithm proposed by Tan et al (2017), which has computational performance similar to its MCMC counterparts.…”
Section: Discussion (mentioning)
confidence: 99%
“…In this way information is shared among sample groups only when appropriate. Tan et al 10 consider metabolic association networks. Their prior on graph structures is an extension of the multiplicative (or Chung‐Lu random graph) model to multiple Gaussian graphical models, linking the probability of edge inclusion through logistic regression.…”
Section: Introduction (mentioning)
confidence: 99%