2017
DOI: 10.48550/arxiv.1701.05265
Preprint

Online Structure Learning for Sum-Product Networks with Gaussian Leaves

Abstract: Sum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which inference is always tractable. Those properties follow from some conditions (i.e., completeness and decomposability) that must be respected by the structure of the network. As a result, it is not easy to specify a valid sum-product network by hand and therefore structure learning techniques a…
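
To make the abstract's conditions concrete, here is a minimal sketch (not the paper's implementation) of a valid SPN with Gaussian leaves, evaluated exactly in one bottom-up pass. The tuple encoding, node names, and parameters are illustrative assumptions.

```python
from math import prod
from statistics import NormalDist

# Illustrative node encoding (an assumption, not the paper's):
#   ("leaf", var, NormalDist)           Gaussian leaf over one variable
#   ("prod", [child, ...])              product node (disjoint child scopes)
#   ("sum",  [(weight, child), ...])    sum node (identical child scopes)

def evaluate(node, x):
    """Exact density in one bottom-up pass -- the tractability that the
    completeness and decomposability conditions guarantee."""
    kind = node[0]
    if kind == "leaf":
        _, var, dist = node
        return dist.pdf(x[var])
    if kind == "prod":
        return prod(evaluate(c, x) for c in node[1])
    return sum(w * evaluate(c, x) for w, c in node[1])  # sum node

# A tiny valid SPN over {x0, x1}: each product decomposes the scope
# into {x0} and {x1}; both sum children share the full scope.
spn = ("sum", [
    (0.6, ("prod", [("leaf", "x0", NormalDist(0.0, 1.0)),
                    ("leaf", "x1", NormalDist(1.0, 0.5))])),
    (0.4, ("prod", [("leaf", "x0", NormalDist(3.0, 1.0)),
                    ("leaf", "x1", NormalDist(-1.0, 2.0))])),
])

print(evaluate(spn, {"x0": 0.2, "x1": 0.9}))  # joint density p(x0, x1)
```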

Cited by 5 publications (4 citation statements)
References 7 publications

“…While it is possible to specify SPNs by hand, weight learning is additionally required to obtain a probability distribution, but also the specification of SPNs has to obey conditions of completeness and decomposability, all of which makes structure learning an obvious choice. Since SPNs were introduced, a number of structure learning frameworks have been developed for those and related data structures, e.g., [20,24,33].…”
Section: Related Work (mentioning)
confidence: 99%
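
The completeness and decomposability conditions the quote refers to can be checked mechanically by computing node scopes. A sketch, reusing the toy encoding from the snippet above:

```python
def scope(node):
    """Set of variables a node's distribution ranges over."""
    if node[0] == "leaf":
        return {node[1]}
    children = node[1] if node[0] == "prod" else [c for _, c in node[1]]
    return set().union(*(scope(c) for c in children))

def is_valid(node):
    """Completeness: all sum children share one scope.
    Decomposability: product children have pairwise disjoint scopes."""
    if node[0] == "leaf":
        return True
    if node[0] == "sum":
        children = [c for _, c in node[1]]
        return (all(scope(c) == scope(children[0]) for c in children)
                and all(is_valid(c) for c in children))
    seen = set()                       # product node
    for c in node[1]:
        s = scope(c)
        if seen & s:                   # overlapping scopes: invalid
            return False
        seen |= s
    return all(is_valid(c) for c in node[1])
```
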
“…SPNs model joint or conditional distributions and can be learned generatively [15] or discriminatively [16] using Expectation Maximization (EM) or gradient descent (GD). Additionally, several algorithms were proposed for simultaneous learning of network parameters and structure [21][22][23]. In this work, we use a simple structure learning technique [12] which begins by initializing the SPN with a random dense structure that is later pruned.…”
Section: A Sum-product Network (mentioning)
confidence: 99%
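
The "random dense structure that is later pruned" idea from the quote could look roughly as follows, again on the toy encoding above; the weight threshold and renormalization are assumptions, not the cited algorithm's actual criterion.

```python
def prune(node, threshold=0.01):
    """Drop sum-node children whose weight falls below the threshold and
    renormalize the survivors (illustrative criterion only)."""
    if node[0] == "leaf":
        return node
    if node[0] == "prod":
        return ("prod", [prune(c, threshold) for c in node[1]])
    kept = [(w, prune(c, threshold)) for w, c in node[1] if w >= threshold]
    # A real implementation must guard against pruning every child.
    total = sum(w for w, _ in kept)
    return ("sum", [(w / total, c) for w, c in kept])
```
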
“…Parameters of an SPN can be learned generatively (Poon and Domingos 2011) or discriminatively (Gens and Domingos 2012) using Expectation Maximization (EM) or gradient descent. Additionally, several algorithms were proposed for simultaneous learning of network parameters and structure (Hsu, Kalra, and Poupart 2017; Gens and Domingos 2013). In this work, we use a simple structure learning technique to build template SPNs.…”
Section: Sum-product Network (mentioning)
confidence: 99%
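
To illustrate the generative parameter learning the quote mentions, here is one soft-EM-style weight update for a root sum node, assuming the evaluate function from the first sketch. Full EM would update every sum node; this simplification is not Poon and Domingos's actual procedure.

```python
def em_update_root(spn, data):
    """One soft-EM weight update for the root sum node (sketch).
    Responsibility of child i for sample x:
        r_i(x) = w_i * p_i(x) / sum_j w_j * p_j(x)
    New weights are the normalized accumulated responsibilities."""
    assert spn[0] == "sum"
    resp = [0.0] * len(spn[1])
    for x in data:
        likes = [w * evaluate(c, x) for w, c in spn[1]]
        z = sum(likes)
        for i, like in enumerate(likes):
            resp[i] += like / z
    total = sum(resp)
    children = [c for _, c in spn[1]]
    return ("sum", list(zip([r / total for r in resp], children)))
```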