2009
DOI: 10.1007/s10994-009-5130-x

Learning multi-linear representations of distributions for efficient inference

Abstract: We examine the class of multi-linear representations (MLR) for expressing probability distributions over discrete variables. Recently, MLR have been considered as intermediate representations that facilitate inference in distributions represented as graphical models. We show that MLR is an expressive representation of discrete distributions and can be used to concisely represent classes of distributions which have exponential size in other commonly used representations, while supporting probabilistic inference …
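The multi-linear representations discussed in the abstract write a discrete distribution as a polynomial that is linear in each variable's indicator terms, so marginals fall out by clamping indicators and evaluating. A minimal sketch of this idea, with a hypothetical two-variable joint (the distribution and names below are illustrative, not from the paper):

```python
# Sketch of a multi-linear representation over binary variables:
#   f(lam) = sum_x P(x) * prod_i lam[i][x[i]]
# where lam[i] = (indicator for X_i = 0, indicator for X_i = 1).
# The joint P below is a made-up example distribution.
P = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

def evaluate(P, lam):
    """Evaluate the multi-linear polynomial at the given indicator settings."""
    total = 0.0
    for x, p in P.items():
        term = p
        for i, xi in enumerate(x):
            term *= lam[i][xi]
        total += term
    return total

# Marginal inference: P(X0 = 1) is obtained by clamping X0's indicators
# to (0, 1) and leaving X1's indicators unclamped at (1, 1).
marginal = evaluate(P, [(0, 1), (1, 1)])  # 0.3 + 0.4 = 0.7
```

With all indicators set to 1 the polynomial sums the whole distribution, which is a quick sanity check that the representation is normalized.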



Cited by 5 publications (3 citation statements)
References 19 publications
“…More interestingly, as the previous example shows, SPNs can compactly represent some classes of distributions in which no conditional independences hold. Multi-linear representations (MLRs) also have this property [24]. Since MLRs are essentially expanded SPNs, an SPN can be exponentially more compact than the corresponding MLR.…”
Section: Sum-Product Networks and Other Models
confidence: 99%
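The exponential gap this statement describes can be made concrete by expanding a small product-of-sums circuit into its flat sum-of-products (MLR-style) form. A sketch under illustrative assumptions (the circuit shape and names are not from the cited works): a product node over n binary sum nodes stores 2n edges, while its expansion has one monomial per joint assignment, i.e. 2**n terms.

```python
from itertools import product

# A product node over n sum nodes, each with two weighted indicator leaves:
#   S = prod_{i=1..n} (w_i0 * x_i0 + w_i1 * x_i1)
# The compact circuit has 2n edges; the expanded multi-linear form
# enumerates every monomial, one per joint assignment.
def expanded_monomials(n):
    """Return the monomials of the flat (MLR-style) expansion."""
    return [tuple(f"x{i}{b}" for i, b in enumerate(bits))
            for bits in product((0, 1), repeat=n)]

n = 10
spn_size = 2 * n                        # edges in the compact circuit: 20
mlr_size = len(expanded_monomials(n))   # terms in the expansion: 1024
```

The circuit grows linearly in n while the expansion doubles with each variable, which is the sense in which an SPN can be exponentially more compact than the corresponding MLR.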
“…In "Learning multi-linear representations of distributions for efficient inference" (Roth and Samdani 2009) present a method of speeding up probabilistic inference by representing discrete probability distributions using multilinear forms.…”
Section: Papers Appearing in the Journal of Machine Learning
confidence: 99%
“…A few additional milestones have contributed to broadening the applications of tractable circuits and their expanded role today. First is the learning of tractable arithmetic circuits from data, starting with [71]; see also [91]. Second is handcrafting the structure of these circuits, which started with [88].…”
confidence: 99%