2017 IEEE Symposium Series on Computational Intelligence (SSCI)
DOI: 10.1109/ssci.2017.8285244
Hidden tree Markov networks: Deep and wide learning for structured data

Abstract: The paper introduces the Hidden Tree Markov Network (HTN), a neuro-probabilistic hybrid fusing the representation power of generative models for trees with the incremental and discriminative learning capabilities of neural networks. We put forward a modular architecture in which multiple generative models of limited complexity are trained to learn structural feature detectors whose outputs are then combined and integrated by neural layers at a later stage. In this respect, the model is both deep, thanks to the…
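The abstract's "deep and wide" idea can be illustrated with a minimal sketch: several limited-complexity detectors each score a tree (the "wide" part), and a small neural network integrates their outputs (the "deep" part). All names here (`ToyDetector`, the dimensions, the random-projection scoring) are illustrative assumptions, not the paper's actual generative tree models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a structural feature detector: the paper uses
# small generative tree models; here each detector just scores a tree
# (encoded as a fixed-size vector) with a bounded random projection.
class ToyDetector:
    def __init__(self, dim):
        self.w = rng.normal(size=dim)

    def score(self, tree_vec):
        return float(np.tanh(self.w @ tree_vec))  # bounded "likelihood-like" score

def wide_features(detectors, tree_vec):
    # "Wide": concatenate the outputs of many limited-complexity detectors.
    return np.array([d.score(tree_vec) for d in detectors])

def neural_combiner(features, W1, b1, W2, b2):
    # "Deep": downstream neural layers integrate the detector outputs.
    h = np.maximum(0.0, W1 @ features + b1)  # ReLU hidden layer
    z = W2 @ h + b2
    return np.exp(z) / np.exp(z).sum()       # softmax over classes

dim, K, H, C = 8, 5, 4, 3                    # illustrative sizes
detectors = [ToyDetector(dim) for _ in range(K)]
W1, b1 = rng.normal(size=(H, K)), np.zeros(H)
W2, b2 = rng.normal(size=(C, H)), np.zeros(C)

tree_vec = rng.normal(size=dim)              # placeholder tree encoding
probs = neural_combiner(wide_features(detectors, tree_vec), W1, b1, W2, b2)
```

The point of the sketch is the dataflow (many small detectors, one neural integrator), not the detector internals, which in the paper are trained generative hidden tree Markov models.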

Cited by 5 publications (4 citation statements)
References 24 publications
“…A notable result is the creation of the hidden tree Markov network (HTN), a neural hybrid architecture which combines the representational power of generative models for trees with the incremental and discriminative learning capabilities of neural networks. In this way a greater depth of unfolding of the generative models over the input structures is guaranteed [8].…”
Section: Synthesizing of Models for Identification of Teletraffic Mar…
confidence: 99%
“…There, the idea was formalized of extending recurrent models to perform a bottom-up processing of the tree, unfolding the network recursively over the tree structure so that the hidden state of the current node depends on those of its children. The same approach has been taken by a number of models from different paradigms, such as the probabilistic bottom-up Hidden Markov Tree model [4], the neural Tree Echo State Network [14] and the neural-probabilistic hybrid of the Hidden Tree Markov Networks (HTNs) [3]. An alternative approach inverts the direction of parsing, processing the tree top-down from the root to its leaves.…”
Section: Introduction
confidence: 99%
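The bottom-up unfolding described above (each node's hidden state computed from its children's states, leaves first) can be sketched as a recursive function. The nested-tuple tree encoding, the tanh transition, and the summed child aggregation are simplifying assumptions for illustration, not the specific transition of any of the cited models.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tree is a nested tuple: (label_vector, [children]).
def bottom_up_state(node, W_in, W_ch):
    # Hidden state of a node depends on its label and on its children's
    # states, computed recursively from the leaves up to the root.
    label, children = node
    child_sum = sum((bottom_up_state(c, W_in, W_ch) for c in children),
                    start=np.zeros(W_ch.shape[0]))
    return np.tanh(W_in @ label + W_ch @ child_sum)

d, h = 4, 6                                   # illustrative label/state sizes
W_in, W_ch = rng.normal(size=(h, d)), rng.normal(size=(h, h))

leaf = lambda: (rng.normal(size=d), [])
tree = (rng.normal(size=d), [leaf(), (rng.normal(size=d), [leaf(), leaf()])])
root_state = bottom_up_state(tree, W_in, W_ch)  # fixed-size encoding of the tree
```

Summing child states makes the sketch handle any out-degree; the cited models differ precisely in how this child aggregation and the node transition are defined.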
“…An alternative approach is that put forward in the Tree Echo State Network (TreeESN) [10], where the recursive neurons are randomly initialized according to a dynamical-system stability criterion and their weights are not adjusted by the training procedure. Recently, the Hidden Tree Markov Networks (HTNs) [11] have been proposed as a hybrid approach integrating probabilistic bottom-up models within a neural architecture and learning scheme.…”
Section: Introduction
confidence: 99%
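The untrained, stability-constrained initialization mentioned for TreeESN-style reservoirs can be sketched as rescaling a random weight matrix to a spectral radius below one. This is a common echo-state-style condition, assumed here purely as an illustration rather than the exact TreeESN criterion, which is stated over tree-structured dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)

def stable_reservoir(h, rho=0.9):
    # Draw random recursive weights, then rescale so the spectral radius
    # equals rho < 1; the weights are fixed and never trained.
    W = rng.normal(size=(h, h))
    return W * (rho / max(abs(np.linalg.eigvals(W))))

W = stable_reservoir(8)  # contractive, untrained recursive weights
```

Only a readout layer would then be fit on top of the fixed reservoir states, which is what makes such models cheap to train.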