2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851851

Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models

Abstract: The Bottom-Up Hidden Tree Markov Model is a highly expressive model for tree-structured data. Unfortunately, it cannot be used in practice due to the intractable size of its state-transition matrix. We propose a new approximation which relies on the Tucker factorisation of tensors. The probabilistic interpretation of such an approximation allows us to define a new probabilistic model for tree-structured data. Hence, we define the new approximated model and derive its learning algorithm. Then, we empirically assess t…
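As a rough illustration of the idea in the abstract (this is not the paper's Bayesian learning algorithm), the sketch below builds a Tucker-structured approximation of the bottom-up state-transition distribution P(Q_u | Q_ch1, …, Q_chL) in NumPy. All sizes and variable names (C, L, R, core, factors) are hypothetical.

```python
import numpy as np

# Illustrative sketch only: approximate the bottom-up transition distribution
# P(Q_u | Q_ch1, ..., Q_chL) of a C-state model with L children by a Tucker
# factorisation (a small core tensor plus one factor matrix per mode).
rng = np.random.default_rng(0)
C, L, R = 6, 3, 2                                      # hidden states, children, mode rank

core = rng.random((R,) * (L + 1))                      # core tensor, shape (R, R, R, R)
factors = [rng.random((C, R)) for _ in range(L + 1)]   # one C x R factor matrix per mode

# Contract core and factors to recover the full (C, C, C, C) tensor.
approx = np.einsum('abcd,ia,jb,kc,ld->ijkl', core, *factors)

# Normalise over the parent mode so every slice is a valid conditional
# distribution over Q_u given the children's states.
approx /= approx.sum(axis=0, keepdims=True)
assert np.allclose(approx.sum(axis=0), 1.0)
```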

Cited by 3 publications (5 citation statements); References 17 publications.
“…Therefore, we obtain the following equation: (7) where each row in the equation, except the last one, contains the application of a linear map; the last row contains a vector. Finally, Eq.…”
Section: A Weighted Sum Approximation (mentioning)
confidence: 99%
“…On the other hand, neural models tend to be less studied from this perspective. In particular, bottom-up neural models seem immune to the exponential growth of the parameter space with respect to the tree output degree, which is instead considered a limiting factor in bottom-up generative models, where approximations are required [2], [3], [7].…”
Section: Introduction (mentioning)
confidence: 99%
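To make the growth mentioned above concrete: with C hidden states and output degree L, the full bottom-up transition distribution P(Q_u | Q_ch1, …, Q_chL) has C^(L+1) entries, whereas a Tucker-style factorisation with mode rank R needs only R^(L+1) core entries plus (L+1)·C·R factor entries. A small sketch, with hypothetical values of C, L, R not taken from the cited papers:

```python
# Parameter counts for the full transition tensor vs. a Tucker factorisation.
# C = hidden states, L = tree output degree, R = Tucker mode rank
# (illustrative values only).
C, R = 10, 3
for L in range(1, 6):
    full = C ** (L + 1)                       # full joint transition tensor
    tucker = R ** (L + 1) + (L + 1) * C * R   # core + one factor per mode
    print(f"L={L}: full={full:>8}, tucker={tucker:>6}")
```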
“…Nonetheless, for computational tractability issues, the same works approximated the tensor with a simple probabilistic factorization largely equivalent to a weighted sum in neural models. Only recently, [4] has introduced a proper tensor decomposition of the n-way probabilistic tensor leveraging a Bayesian Tucker decomposition.…”
Section: Neural Model Compression (mentioning)
confidence: 99%
“…Apart from their direct application to multi-way input data analysis, tensors are widely adopted as a fundamental building block for machine learning models. Firstly, they have found application in a variety of machine learning paradigms, ranging from neural networks [2,3] to probabilistic models [4], to enable the efficient compression of the model parameters leveraging tensor decomposition methods. Secondly, they provide a means to extend existing vectorial machine learning models to capture richer data representations, where tensor decompositions provide the necessary theoretical and methodological backbone to study, characterize and control the expressiveness of the model [5].…”
Section: Introduction (mentioning)
confidence: 99%
“…the context). The choice of a simple aggregation function can lead to sub-optimal results; to this end, more complex functions based on Tucker tensor decomposition [3] have been used successfully both in probabilistic [4] and neural [5] models for tree-structured data.…”
Section: Introduction (mentioning)
confidence: 99%
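For the neural side of the last statement, a minimal sketch of a Tucker-style aggregation of children hidden states into a parent state could look as follows; the names and dimensions are illustrative only and do not reproduce the models of [4] or [5].

```python
import numpy as np

# Hedged sketch: aggregate L children hidden vectors into a parent hidden
# vector via a Tucker-style contraction (per-child projection matrices plus
# a small core tensor). All names and dimensions are illustrative.
rng = np.random.default_rng(1)
H, R, L = 8, 3, 2                                        # hidden size, mode rank, children

core = rng.standard_normal((R, R, H))                    # one mode per child + output mode
projections = [rng.standard_normal((H, R)) for _ in range(L)]
children = [rng.standard_normal(H) for _ in range(L)]    # children hidden states

# Project each child into the rank-R space, then contract with the core.
z = [p.T @ h for p, h in zip(projections, children)]
parent = np.tanh(np.einsum('abh,a,b->h', core, z[0], z[1]))
print(parent.shape)   # (H,)
```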