2021
DOI: 10.48550/arxiv.2101.03093
Preprint
Learning non-Gaussian graphical models via Hessian scores and triangular transport

Abstract: Undirected probabilistic graphical models represent the conditional dependencies, or Markov properties, of a collection of random variables. Knowing the sparsity of such a graphical model is valuable for modeling multivariate distributions and for efficiently performing inference. While the problem of learning graph structure from data has been studied extensively for certain parametric families of distributions, most existing methods fail to consistently recover the graph structure for non-Gaussian data. Here…
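For context on the Gaussian baseline the abstract contrasts with, here is a minimal numpy sketch (my own illustration, not from the paper): for a multivariate normal, zeros in the precision (inverse covariance) matrix correspond exactly to missing edges in the undirected graph, which is the property that breaks down for general non-Gaussian data.

```python
import numpy as np

# Chain graph 1 - 2 - 3: variables 1 and 3 are conditionally independent
# given 2, so the (1, 3) entry of the precision matrix is zero.
precision = np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  2.0]])
cov = np.linalg.inv(precision)

# The covariance itself is dense (every pair is marginally dependent) ...
print(np.round(cov, 3))
# ... but inverting it recovers the sparse conditional-independence pattern.
print(np.round(np.linalg.inv(cov), 3))
```

The paper's question is how to recover this kind of edge structure when the distribution is not Gaussian and no single matrix inverse encodes it.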

Cited by 4 publications (5 citation statements)
References 37 publications
“…In the field of learning undirected graphical models, nonparanormal distributions are often used as non-Gaussian test cases for learning algorithms: the graph does not change under the transformation (and so inherits the same graph that is prescribed for the multivariate normal vector), despite the marginal distributions being clearly non-Gaussian [9]. Our previous work showed numerically that assuming, incorrectly, that the nonparanormal data is in fact Gaussian does not significantly impair graph learning [2]. This was surprising at the time, but the current work shows how the precision matrix still encodes the conditional independence structure: with respect to the undirected graph, the nonparanormal distribution behaves like a Gaussian.…”
Section: Discussion (mentioning, confidence: 99%)
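The invariance described in this citation statement can be sketched numerically. The chain graph and the cubic marginal transform below are illustrative choices of mine, not taken from the papers: we sample a Gaussian with sparse precision, push it through a strictly increasing marginal map (giving a nonparanormal distribution), and fit a precision matrix as if the data were Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent Gaussian with chain structure 1 - 2 - 3 (sparse precision).
latent_precision = np.array([[ 2.0, -1.0,  0.0],
                             [-1.0,  2.0, -1.0],
                             [ 0.0, -1.0,  2.0]])
x = rng.multivariate_normal(np.zeros(3),
                            np.linalg.inv(latent_precision),
                            size=50_000)

# Strictly increasing marginal transform -> nonparanormal data.
y = x + 0.2 * x**3

# "Incorrect" Gaussian fit: invert the empirical covariance.
p_hat = np.linalg.inv(np.cov(y, rowvar=False))
print(np.round(p_hat, 3))
```

The non-edge entry (1, 3) of the fitted precision comes out much smaller in magnitude than the true-edge entries, so thresholding the misspecified Gaussian fit still recovers the chain graph, consistent with the quoted observation.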
“…In general, this correspondence (between the second-moment matrix and independence properties, and between its inverse and conditional independence properties) does not hold for non-Gaussian distributions. Tests to determine independence and conditional independence then become more complex than matrix estimation: the complexity of exhaustive pairwise testing techniques scales exponentially with the number of variables [7]; other methods compute scores or combine one-dimensional conditional distributions for the exponential family [8,12,11]; another approach (by two of the current co-authors) identifies conditional independence for arbitrary non-Gaussian distributions from the Hessian of the log density, but is so far computationally limited to rather small graphs [2]. Thus, it is of broad interest to extract marginal and conditional independence properties of a distribution analytically, prior to any estimation procedure.…”
Section: Introduction (mentioning, confidence: 99%)
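A toy version of the Hessian-based criterion mentioned above [2] can be sketched as follows: X_i and X_j are conditionally independent given the remaining variables iff the mixed partial d² log p / dx_i dx_j vanishes everywhere. The quartic density below is a hypothetical non-Gaussian example of mine with chain structure 1-2-3; the score averages the mixed partial's magnitude over sample points.

```python
import numpy as np

def log_density(x):
    """Unnormalized log density with chain structure 1 - 2 - 3."""
    x1, x2, x3 = x
    return -(x1**4 + x2**4 + x3**4) - x1 * x2 - x2 * x3

def mixed_partial(f, x, i, j, h=1e-4):
    """Central finite-difference estimate of d^2 f / dx_i dx_j."""
    def shift(si, sj):
        z = x.copy()
        z[i] += si * h
        z[j] += sj * h
        return f(z)
    return (shift(1, 1) - shift(1, -1) - shift(-1, 1) + shift(-1, -1)) / (4 * h * h)

rng = np.random.default_rng(1)
points = rng.normal(size=(200, 3))

# Average |d^2 log p / dx_i dx_j| over sample points as an edge score.
score = np.zeros((3, 3))
for i in range(3):
    for j in range(i + 1, 3):
        score[i, j] = np.mean(
            [abs(mixed_partial(log_density, pt, i, j)) for pt in points])

print(np.round(score, 3))   # the (1, 3) score is ~0: no edge between X1 and X3
```

For this density the (1, 2) and (2, 3) mixed partials are identically -1 while the (1, 3) partial is identically zero, so the score matrix reproduces the graph; in general the Hessian must be estimated from data, which is what limits the method to small graphs.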
“…, z_k). The Rosenblatt transformation can take advantage of the Markov structure that the prior may have to enforce sparsity in the map T, which enables fast evaluation of the transformation; see [4,53] for further details.…”
Section: General Priors (mentioning, confidence: 99%)
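The sparsity point in this citation statement can be sketched in the Gaussian special case (an illustrative simplification of mine; the papers treat general nonlinear triangular maps). For a Markov-chain target with tridiagonal precision P = L Lᵀ, the linear triangular map S(x) = Lᵀx pushes the target to a standard normal, and L is lower-bidiagonal: each map component touches only neighbouring variables, which is what makes evaluating the transport cheap.

```python
import numpy as np

n = 6
# Tridiagonal precision of a Gaussian Markov chain on n variables.
P = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = np.linalg.cholesky(P)

# The Cholesky factor of a tridiagonal matrix is lower-bidiagonal:
# only the main diagonal and one sub-diagonal are nonzero.
print(np.round(L, 3))

# Verify that S(x) = L^T x whitens the target: L^T Sigma L = I.
Sigma = np.linalg.inv(P)
print(np.round(L.T @ Sigma @ L, 3))
```

With a denser graph the Cholesky factor fills in, so the per-component cost of the triangular map tracks the Markov structure of the prior, mirroring the sparsity argument quoted above.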