2021
DOI: 10.48550/arxiv.2107.00730
Preprint

Normalizing Flow based Hidden Markov Models for Classification of Speech Phones with Explainability

Anubhab Ghosh,
Antoine Honoré,
Dong Liu
et al.

Abstract: In pursuit of explainability, we develop generative models for sequential data. The proposed models provide state-of-the-art classification results and robust performance for speech phone classification. We combine modern neural networks (normalizing flows) and traditional generative models (hidden Markov models -HMMs). Normalizing flow-based mixture models (NMMs) are used to model the conditional probability distribution given the hidden state in the HMMs. Model parameters are learned through judicious combin…
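The architecture the abstract describes — a normalizing flow serving as the per-state conditional (emission) density inside an HMM — can be illustrated with a minimal sketch. This is a toy, not the paper's NMM: it uses a single elementwise affine flow per state (equivalent to a diagonal Gaussian) purely to show how a flow's change-of-variables log-density plugs into the standard HMM forward recursion. All function names and array shapes here are assumptions for illustration.

```python
import numpy as np

def logsumexp(a, axis=None):
    # Numerically stable log-sum-exp.
    m = np.max(a, axis=axis, keepdims=True)
    s = m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))
    return np.squeeze(s) if axis is None else np.squeeze(s, axis=axis)

def affine_flow_logpdf(x, shift, scale):
    # Change-of-variables density for an elementwise affine flow:
    # z = (x - shift) / scale with base density N(0, I), so
    # log p(x) = log N(z; 0, I) - sum_d log|scale_d|.
    z = (x - shift) / scale
    log_base = -0.5 * np.sum(z**2 + np.log(2.0 * np.pi), axis=-1)
    return log_base - np.sum(np.log(np.abs(scale)))

def hmm_forward_loglik(X, log_pi, log_A, shifts, scales):
    # Log-domain forward algorithm; emissions come from per-state flows.
    # X: (T, D) observations; log_pi: (K,); log_A[i, j] = log P(s_j | s_i).
    T, K = len(X), len(log_pi)
    log_emit = np.array([[affine_flow_logpdf(X[t], shifts[k], scales[k])
                          for k in range(K)] for t in range(T)])
    log_alpha = log_pi + log_emit[0]
    for t in range(1, T):
        log_alpha = log_emit[t] + logsumexp(log_alpha[:, None] + log_A, axis=0)
    return logsumexp(log_alpha)
```

Swapping a richer invertible network (e.g. coupling layers) into `affine_flow_logpdf` leaves the forward recursion untouched, which is the appeal of the HMM + flow combination: the sequential structure and the emission density are trained and evaluated independently of one another.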

Cited by 1 publication (1 citation statement)
References 29 publications
“…Future work includes stronger network architectures, e.g., based on transformers [46], and/or a separate post-net like in [30]. It also seems compelling to combine neural HMMs with powerful distribution families such as normalising flows, either replacing the Gaussian assumption (as done for non-neural HMMs in [47]) or as a probabilistic post-net like in [22]. This might allow the naturalness of sampled speech to surpass that of deterministic output generation.…”
Section: Discussion
confidence: 99%