2018
DOI: 10.1007/s10994-018-5760-y
Visualizing and understanding Sum-Product Networks

Abstract: Sum-Product Networks (SPNs) are deep tractable probabilistic models by which several kinds of inference queries can be answered exactly and in tractable time. They have largely been used as black-box density estimators, assessed by comparing their likelihood scores on different tasks. In this paper we explore and exploit the inner representations learned by SPNs. By taking a closer look at the inner workings of SPNs, we aim to better understand what and how meaningful the representations they learn are, as …
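To make the abstract's claim of exact, tractable inference concrete, here is a minimal sketch of bottom-up SPN evaluation. The structure and parameters are hypothetical (not taken from the paper): leaves are univariate Bernoulli distributions, product nodes multiply children over disjoint scopes, and sum nodes take convex combinations of children over the same scope.

```python
class Leaf:
    """Univariate Bernoulli leaf over variable `var` with P(X=1) = p."""
    def __init__(self, var, p):
        self.var, self.p = var, p
    def value(self, x):
        return self.p if x[self.var] == 1 else 1.0 - self.p

class Product:
    """Product node: multiplies children defined over disjoint scopes."""
    def __init__(self, children):
        self.children = children
    def value(self, x):
        v = 1.0
        for c in self.children:
            v *= c.value(x)
        return v

class Sum:
    """Sum node: convex combination of children over the same scope."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, child)
    def value(self, x):
        return sum(w * c.value(x) for w, c in self.weighted_children)

# A tiny SPN over two binary variables X0, X1 (illustrative parameters):
spn = Sum([
    (0.6, Product([Leaf(0, 0.9), Leaf(1, 0.2)])),
    (0.4, Product([Leaf(0, 0.3), Leaf(1, 0.8)])),
])

# Exact joint query P(X0=1, X1=0) in one bottom-up pass, linear in the
# number of edges -- the "tractable time" the abstract refers to.
print(spn.value({0: 1, 1: 0}))  # 0.6*0.9*0.8 + 0.4*0.3*0.2 = 0.456
```

Because the weights at each sum node sum to one and the leaves are normalized distributions, the network computes a valid joint probability; marginal and conditional queries reduce to the same single pass with some leaves replaced by 1.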

Cited by 19 publications (17 citation statements)
References 17 publications
“…Vergari et al. [90] also evaluated SPNs for representation learning [91]. Their SPNs encode a hierarchy of part-based representations which can be ordered by scope length.…”
Section: Other Applications (mentioning; confidence: 99%)
“…These techniques, however, have not been extended to hybrid domains. On the other hand, MSPNs (Molina et al. 2018) are state-of-the-art density estimators that extend Sum-Product Networks (Poon and Domingos 2011; Darwiche 2009; Vergari, Di Mauro, and Esposito 2019) by introducing continuous variables and polynomial densities at the leaves. Like DETs, MSPNs allow efficient inference so long as the query formula is axis-aligned, and are learned using a greedy scheme.…”
Section: Related Work (mentioning; confidence: 99%)
“…Please note that in this way we encompass highly expressive and complex joint distributions that might have generated the data (while retaining control over their statistical dependencies, types, and likelihood models). Indeed, SPNs have been demonstrated to capture linear [22] and highly non-linear correlations [23,35] in the data, and they are also able to model constrained random vectors, such as those drawn from the simplex (see [23]).…”
Section: Synthetic Data Experiments (mentioning; confidence: 99%)