2015
DOI: 10.1007/978-3-319-23525-7_21

Simplifying, Regularizing and Strengthening Sum-Product Network Structure Learning

Abstract: The need for feasible inference in Probabilistic Graphical Models (PGMs) has led to tractable models like Sum-Product Networks (SPNs). Their high expressive power and their ability to provide exact and tractable inference make them very attractive for several real-world applications, from computer vision to NLP. Recently, great attention around SPNs has focused on structure learning, leading to different algorithms that are able to learn both the network and its parameters from data. Here, we enhance one of th…

Cited by 62 publications (76 citation statements)
References 13 publications
“…To randomly create a ground truth generative model, i.e., an SPN, denoted as S, we simulate a stochastic guillotine partitioning of a fictitious N × D data matrix. Specifically, we follow a LearnSPN-like [14, 34] structure learning scheme in which columns and rows of this matrix are clustered together, in this case in a random fashion.…”
Section: Synthetic Data Experiments (mentioning)
confidence: 99%
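The random guillotine partitioning described in this excerpt is easy to prototype. Below is a minimal Python sketch of a LearnSPN-like scheme that alternates random row clustering (sum nodes) and random column splits (product nodes) over an N × D matrix; the tuple-based node encoding, the binary splits, and the `min_rows` threshold are illustrative assumptions, not the cited papers' implementation.

```python
import numpy as np

def random_guillotine_spn(n_rows, cols, rng, min_rows=2):
    """Simulate a stochastic guillotine partitioning of an N x D data
    matrix, LearnSPN-style: sum nodes randomly cluster rows (instances),
    product nodes randomly split columns (variables). Illustrative sketch."""
    if len(cols) == 1:
        return ('leaf', cols[0])                      # univariate leaf
    if n_rows <= min_rows:
        # Too few instances left: fall back to a fully factorized product.
        return ('product', [('leaf', c) for c in cols])
    if rng.random() < 0.5:
        # Product node: randomly split the variables into two blocks.
        perm = rng.permutation(cols)
        k = int(rng.integers(1, len(cols)))
        return ('product', [random_guillotine_spn(n_rows, list(perm[:k]), rng, min_rows),
                            random_guillotine_spn(n_rows, list(perm[k:]), rng, min_rows)])
    # Sum node: randomly cluster the instances into two groups.
    n_left = int(rng.integers(1, n_rows))
    children = [random_guillotine_spn(n_left, cols, rng, min_rows),
                random_guillotine_spn(n_rows - n_left, cols, rng, min_rows)]
    weights = rng.dirichlet(np.ones(len(children)))   # random sum weights
    return ('sum', list(zip(weights.tolist(), children)))

rng = np.random.default_rng(0)
spn = random_guillotine_spn(n_rows=100, cols=list(range(5)), rng=rng)
```

Because rows shrink at every sum split and columns shrink at every product split, the recursion always bottoms out in univariate leaves.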
“…To articulate with the tensor-based reformulation of the SPN, we will need to slightly modify the induced tree (Definition 3) by terminating at the leaf nodes, i.e., the bottom nodes of the network. In fact, prevailing SPN learning algorithms or the like, e.g., LearnSPN, SPN-B and SPN-BT [11], [2], all produce SPN trees terminating at leaf nodes. Although SPN illustrations often utilize networks with shared weights (e.g., the two top branches in Fig.…”
Section: A. SPN Basics (mentioning)
confidence: 99%
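The leaf-terminated induced trees this excerpt refers to can be made concrete with the tuple encoding from the sketch above. The following is a hedged illustration of the standard definition (keep exactly one weighted child per sum node, keep all children per product node, stop at the leaves), not the cited paper's tensor reformulation; it enumerates every induced tree with the product of sum weights along it, so it is only practical for tiny networks.

```python
from itertools import product as iproduct

def induced_trees(node):
    """Yield (weight, leaves) pairs, one per induced tree of a tuple-encoded
    SPN: one weighted child is kept at each sum node, all children are kept
    at each product node, and the tree terminates at the leaf nodes."""
    kind = node[0]
    if kind == 'leaf':
        yield 1.0, [node]
    elif kind == 'sum':
        for w, child in node[1]:
            for cw, leaves in induced_trees(child):
                yield w * cw, leaves
    else:  # 'product'
        for combo in iproduct(*(list(induced_trees(c)) for c in node[1])):
            weight, leaves = 1.0, []
            for cw, cl in combo:
                weight *= cw
                leaves += cl
            yield weight, leaves
```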
“…denote scalars. A set of d tensors, like that of a tensor train (TT), is denoted as A^(1), A^(2), ….”
Section: B. Tensor Basics (mentioning)
confidence: 99%
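The notation in this excerpt follows the standard tensor-train format: a d-way tensor is stored as d cores A^(1), …, A^(d), where core A^(k) has shape (r_{k-1}, n_k, r_k) with boundary ranks r_0 = r_d = 1, and an entry is recovered by chaining matrix products of core slices. The numpy sketch below uses illustrative dimensions and ranks; it reflects the standard TT definition rather than anything specific to the citing paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, ranks = [4, 3, 5], [1, 2, 2, 1]            # n_k and r_k (r_0 = r_3 = 1)
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1]))
         for k in range(len(dims))]              # cores A^(1), A^(2), A^(3)

def tt_entry(cores, idx):
    """Evaluate X[i_1, ..., i_d] by chaining matrix products of core slices."""
    acc = np.eye(1)                              # (1, 1) identity, rank r_0 = 1
    for core, i in zip(cores, idx):
        acc = acc @ core[:, i, :]                # (1, r_{k-1}) @ (r_{k-1}, r_k)
    return acc.item()                            # final shape is (1, 1)

print(tt_entry(cores, (2, 0, 4)))
```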