2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00246

Comparing the Modeling Powers of RNN and HMM

Cited by 11 publications (5 citation statements). References 10 publications.

“…From a theoretical point of view, we believe that the derivations that led to the neural interpretation of Gaussian mixture models could benefit other mixture models, such as the von Mises-Fisher mixture models (Hasnat et al., 2017) or hidden Markov models (HMMs). The case of the Gaussian HMM seems especially promising, as it allows a bridge with recurrent networks (Salaün et al., 2019)…”
Section: Discussion
Citation type: mentioning; confidence: 99%
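
The "bridge" with recurrent networks that this excerpt mentions can be seen in the HMM forward recursion, which updates a belief over the hidden states from one observation to the next much like an RNN cell updates its hidden vector. Below is a minimal illustrative sketch of one step of that recursion for a Gaussian-emission HMM; it is not code from the cited papers, and all names, dimensions, and parameters are assumptions.

import numpy as np
from scipy.stats import multivariate_normal

def gaussian_hmm_forward_step(alpha_prev, x_t, A, means, covs):
    """One step of the forward recursion: alpha_t = f(alpha_{t-1}, x_t).
    A is the row-stochastic transition matrix; means/covs parameterize
    the per-state Gaussian emissions."""
    # Emission likelihoods b_j(x_t) under each state's Gaussian.
    b = np.array([multivariate_normal.pdf(x_t, mean=means[j], cov=covs[j])
                  for j in range(len(means))])
    # alpha_t(j) = sum_i alpha_{t-1}(i) * A[i, j] * b_j(x_t)
    alpha = (alpha_prev @ A) * b
    return alpha / alpha.sum()  # normalize, as in the scaled forward algorithm

Read as a recurrence, the filtered state distribution plays the role of an RNN hidden state, with (A, means, covs) acting as the weights; this is the structural similarity the excerpt points to.
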
“…Subsequently, the visual representations of an FFNN and an RNN have no meaning other than displaying the architecture of the network. Even though DBNs, HMMs, and RNNs can all be used to estimate an unobserved variable given a sequence of observations, RNNs deduce this latent variable deterministically from the observed sequence instead of making use of, for example, a Markov chain [40]…”
Section: B. Driver Intention Recognition Methods
Citation type: mentioning; confidence: 99%
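
To make the contrast drawn in this excerpt concrete, here is a minimal sketch of how each model builds its latent variable; all names and sizes are illustrative assumptions, not taken from the cited work. The HMM state is sampled along a Markov chain, so rerunning it yields a different path, while the RNN hidden state is a deterministic function of the observed sequence.

import numpy as np

rng = np.random.default_rng(0)
n_states, obs_dim, hidden_dim, T = 3, 4, 5, 10

# HMM: the latent state evolves stochastically along a Markov chain.
A = rng.dirichlet(np.ones(n_states), size=n_states)  # row-stochastic transitions
z = int(rng.integers(n_states))
hmm_path = []
for t in range(T):
    z = int(rng.choice(n_states, p=A[z]))  # sampled, not computed
    hmm_path.append(z)

# RNN: the latent state is computed deterministically from the observations.
W = rng.normal(size=(hidden_dim, obs_dim))
U = rng.normal(size=(hidden_dim, hidden_dim))
x = rng.normal(size=(T, obs_dim))
h = np.zeros(hidden_dim)
for t in range(T):
    h = np.tanh(W @ x[t] + U @ h)  # same x and weights always give the same h
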
“…HMMs and RNNs share similarities, since both comprise latent variables. However, they differ in how these variables are built [31]. The choice between HMMs and RNNs depends on the application domain…”
Section: Classification
Citation type: mentioning; confidence: 99%