We present a lattice-based spoken term detection (STD) method for German broadcast news data and compare it to a previously proposed fuzzy search. Because of the severe out-of-vocabulary (OOV) problem in German, we evaluate suitable subword indexing units for lattice retrieval. Since words are a robust indexing unit, we also investigate hybrid lattice retrieval over both words and subwords. We show that efficient lattice graph and score pruning techniques increase the precision of subword retrieval by 8% absolute with only a small loss in recall. In addition, we observe a speed-up of up to a factor of 6.
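The abstract above does not spell out its pruning and indexing procedures, so the following is only a minimal sketch of score-based lattice pruning before inverted indexing, assuming a flat list of arcs as the lattice representation. The Arc fields, the beam-style pruning rule, and the names prune_arcs and build_index are illustrative stand-ins, not the paper's actual techniques.

```python
from dataclasses import dataclass

@dataclass
class Arc:
    start: int    # source lattice node
    end: int      # target lattice node
    label: str    # subword unit, e.g. a syllable
    score: float  # negative log posterior; lower is better

def prune_arcs(arcs, beam):
    """Keep only arcs whose score lies within `beam` of the best
    (lowest-scoring) arc leaving the same node."""
    best = {}
    for a in arcs:
        if a.start not in best or a.score < best[a.start]:
            best[a.start] = a.score
    return [a for a in arcs if a.score <= best[a.start] + beam]

def build_index(arcs):
    """Invert the surviving arcs into a unit -> occurrence list, the
    basic structure used to answer STD queries."""
    index = {}
    for a in arcs:
        index.setdefault(a.label, []).append((a.start, a.end, a.score))
    return index

# Toy lattice: two competing subword hypotheses between nodes 0 and 1.
arcs = [Arc(0, 1, "nach", 0.2), Arc(0, 1, "nacht", 2.5), Arc(1, 2, "richt", 0.1)]
print(build_index(prune_arcs(arcs, beam=1.0)))  # the weak "nacht" arc is pruned
```

Pruning before indexing is what drives the speed/precision trade-off the abstract reports: fewer low-confidence arcs mean a smaller index and fewer false alarms, at the cost of occasionally dropping a correct hypothesis.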
The design and evaluation of subword-based spoken term detection (STD) systems depend on various factors, such as the language, the type of speech to be searched, and the application scenario. The choice of subword unit and search approach, however, is often made without regard to these factors. We therefore evaluate two subword STD systems on two data sets with differing properties to investigate how different subword units affect STD performance on different data types. Results show that constrained search in syllable lattices is effective on German broadcast news, whereas fuzzy phone lattice search is superior on the more challenging English conversational telephone speech. By combining the key features of the two systems at an early stage, we achieve improvements in Figure of Merit of up to 13.4% absolute on the German data. We also show that choosing an appropriate evaluation metric is crucial when comparing retrieval performance across systems.
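As a rough illustration of the fuzzy phone search mentioned above, here is a minimal sketch assuming the query and candidates are plain phone-symbol lists read off lattice paths. The paper's fuzzy search presumably operates on phone lattices directly with weighted confusion costs; the uniform edit costs and function names below are simplifying assumptions.

```python
def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance over phone symbols."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        curr = [i]
        for j, pb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (pa != pb)))   # substitution
        prev = curr
    return prev[-1]

def fuzzy_search(query, candidates, max_cost=1):
    """Return (candidate, cost) pairs within `max_cost` edits of the
    query phone sequence, best matches first."""
    hits = [(c, edit_distance(query, c)) for c in candidates]
    return sorted([h for h in hits if h[1] <= max_cost], key=lambda h: h[1])

query = ["b", "er", "l", "i", "n"]
candidates = [["b", "er", "l", "i", "n"],   # exact hit
              ["b", "eh", "l", "i", "n"],   # one substitution: still matches
              ["m", "y", "n", "c", "n"]]    # too far away: rejected
print(fuzzy_search(query, candidates))
```

The tolerance for phone-level mismatches is what makes this style of search attractive on noisy conversational speech, where recognizer phone sequences frequently deviate from the canonical pronunciation of the query.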
We present a framework for learning a pronunciation lexicon for an Automatic Speech Recognition (ASR) system from multiple utterances of the same training words, where the lexical identities of the words are unknown. Instead of only learning pronunciations for known words, we go one step further and learn both spelling and pronunciation in a joint optimization. Decoding based on linguistically motivated hybrid subword units generates the joint lexical search space, which is then reduced to the most appropriate lexical entries by a set of simple pruning techniques. A cascade of letter and acoustic pruning, followed by rescoring the N-best hypotheses with discriminative decoder statistics, yields lexical entries that are optimal in terms of both spelling and pronunciation. Evaluating the framework on English isolated word recognition, we achieve reductions of 7.7% absolute in word error rate and 20.9% absolute in character error rate over baselines that use no pruning.
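To make the pruning cascade concrete, here is a minimal sketch assuming each hypothesis already carries a letter (grapheme) score and an acoustic score. The beam values, field names, and the weighted score combination are illustrative assumptions; in particular, the weighted sum is only a placeholder for the paper's rescoring with discriminative decoder statistics.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    spelling: str
    pronunciation: tuple   # phone sequence
    letter_score: float    # e.g. grapheme language-model log-probability
    acoustic_score: float  # e.g. acoustic log-likelihood

def select_entry(hyps, letter_beam, acoustic_beam, n_best, weight=0.5):
    """Letter pruning, then acoustic pruning, then rescoring of the
    N best survivors; returns the winning lexical entry."""
    top = max(h.letter_score for h in hyps)
    hyps = [h for h in hyps if h.letter_score >= top - letter_beam]
    top = max(h.acoustic_score for h in hyps)
    hyps = [h for h in hyps if h.acoustic_score >= top - acoustic_beam]
    hyps = sorted(hyps, key=lambda h: h.acoustic_score, reverse=True)[:n_best]
    # Placeholder rescoring: a weighted combination of the two scores.
    return max(hyps, key=lambda h: weight * h.letter_score
                                 + (1 - weight) * h.acoustic_score)

hyps = [
    Hypothesis("phonics", ("f", "ow", "n", "ih", "k", "s"), -4.1, -120.0),
    Hypothesis("fonix",   ("f", "ow", "n", "ih", "k", "s"), -9.8, -119.5),
    Hypothesis("phonics", ("p", "hh", "n", "ih", "k", "s"), -4.1, -160.2),
]
print(select_entry(hyps, letter_beam=5.0, acoustic_beam=10.0, n_best=2))
```

The cascade ordering matters: cheap letter-level pruning first shrinks the joint spelling/pronunciation space, so the more expensive acoustic comparison and rescoring only run on a small set of surviving candidates.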