An Efficient Method for Vocabulary Addition to WFST Graphs
2016
DOI: 10.1007/978-3-319-45510-5_52

Cited by 3 publications (5 citation statements); references 9 publications.
“…The general mechanism of vocabulary expansion with WFST always involves modifications of the transducers together with symbol-table modification. Typically the phoneme set and acoustic modelling stay unchanged, so only L and G need to be modified. Modifying G consists of expanding a class-based model using the Replace operation [17,19,20,21]. Modifying L might look trivial at first glance, but it is not a simple problem if we want to keep it minimal [18,27].…”
Section: WFST-based Vocabulary Expansion Techniques
confidence: 99%
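The Replace idea in the excerpt above can be illustrated without a WFST library: the grammar G contains a nonterminal placeholder wherever out-of-vocabulary words may occur, and expansion substitutes each new word at that point while leaving the rest of the graph untouched. This is a minimal pure-Python sketch of that mechanism, not OpenFst's actual `Replace`; the `$NEW` token and example paths are invented for illustration.

```python
def expand_class_grammar(grammar_paths, nonterminal, new_words):
    """Expand every path through `grammar_paths` by substituting
    `nonterminal` with each word in `new_words`, leaving paths
    without the nonterminal unchanged."""
    expanded = []
    for path in grammar_paths:
        if nonterminal not in path:
            expanded.append(path)
            continue
        # One expanded path per new word, mirroring the arcs that
        # WFST Replace would splice in at the nonterminal.
        for word in new_words:
            expanded.append([word if tok == nonterminal else tok
                             for tok in path])
    return expanded

G = [["call", "$NEW"], ["play", "music"]]
print(expand_class_grammar(G, "$NEW", ["alice", "bob"]))
# [['call', 'alice'], ['call', 'bob'], ['play', 'music']]
```

In the real WFST setting the substitution is done lazily on the transducer itself, so the grammar is not enumerated path by path as it is here.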
“…In [18,19] the authors suggested methods that separate newly added words into groups depending on their boundary phonemes, while constructing the initial CLG/HCLG with several context-dependent auxiliary symbols that are later replaced with the new words. In [17] a simpler approach was proposed to deal with context dependency:…”
Section: WFST-based Vocabulary Expansion Techniques
confidence: 99%
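The grouping step described in [18,19] can be sketched as follows: each new word is keyed by its first and last phonemes, so that every group can be spliced in at the auxiliary symbol carrying the matching left/right context. The lexicon entries and phoneme labels below are invented examples, not data from the cited papers.

```python
from collections import defaultdict

def group_by_boundary_phonemes(lexicon):
    """Map (first_phoneme, last_phoneme) -> list of words sharing
    that boundary, so each group matches one context-dependent
    auxiliary symbol in CLG/HCLG."""
    groups = defaultdict(list)
    for word, phones in lexicon.items():
        groups[(phones[0], phones[-1])].append(word)
    return dict(groups)

lexicon = {"alexa": ["AH", "L", "EH", "K", "S", "AH"],
           "siri":  ["S", "IH", "R", "IY"]}
print(group_by_boundary_phonemes(lexicon))
# {('AH', 'AH'): ['alexa'], ('S', 'IY'): ['siri']}
```

Words in the same group share the same cross-word triphone contexts at their edges, which is what makes splicing them at a single auxiliary arc safe.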
“…In these systems both the language model and lexicon are fixed and encoded as a WFST; this means words that were not part of these systems at training time are impossible to recognize. This has led to various approaches to modify the WFSTs so that the ASR system can recognize words it previously had no knowledge of [2,3,4,5,6,7]. A complication is that typically the lexicon and language model WFSTs will be composed together to create a static decoding graph that can be used repeatedly during decoding.…”
Section: Introduction
confidence: 99%
“…Finally, one can try to modify the static decoding graph (HCLG) [5,6,7]. Because of composition and optimization (e.g.…”
Section: Introduction
confidence: 99%