Improvements in Serbian Speech Recognition using Sequence-Trained Deep Neural Networks

Abstract. This paper presents recent improvements in Serbian speech recognition obtained by using contemporary deep neural networks with sequence-discriminative training to build robust acoustic models. More specifically, several variants of a new large vocabulary continuous speech recognition (LVCSR) system are described, all based on the lattice-free version of the maximum mutual information (LF-MMI) training criterion. The parameters of the system were varied to achieve the best possible word error rate (WER) and character error rate (CER), using the largest existing speech database for Serbian and the best general-purpose n-gram language model. In addition to tuning the neural network itself (number of layers, complexity, layer splicing, etc.), other language-specific optimizations were explored, such as accent-specific vowel phoneme models and their combination with pitch features. Finally, speech database tuning was tested as well: the database was artificially expanded by modifying the speech speed of utterances, and volume scaling was applied in an attempt to improve speech variability. The results showed that an 8-layer deep neural network with 625 neurons per layer works best in the given environment, without the need for speech database augmentation or volume adjustments, and that pitch features combined with accented vowel models provide the best performance of all experiments.
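The speed and volume perturbation mentioned above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the function names are hypothetical, the linear interpolation stands in for the higher-quality resampling a toolkit such as sox would perform, and the 0.9x/1.0x/1.1x speed factors are assumed from common augmentation practice rather than stated in the abstract.

```python
import numpy as np

def speed_perturb(samples: np.ndarray, factor: float) -> np.ndarray:
    """Resample a mono waveform to simulate a change in speaking rate.

    factor > 1.0 speeds the utterance up (fewer samples),
    factor < 1.0 slows it down. Linear interpolation is a rough
    stand-in for proper resampling.
    """
    n_out = int(round(len(samples) / factor))
    old_idx = np.linspace(0, len(samples) - 1, num=n_out)
    return np.interp(old_idx, np.arange(len(samples)), samples)

def volume_perturb(samples: np.ndarray, gain: float) -> np.ndarray:
    """Scale the waveform amplitude by a fixed gain factor."""
    return samples * gain

# Expand a corpus with 0.9x / 1.0x / 1.1x speed copies of each utterance
# (hypothetical factors; the dummy sine wave stands in for real audio).
utterance = np.sin(np.linspace(0, 2 * np.pi, 16000))  # 1 s at 16 kHz
augmented = [speed_perturb(utterance, f) for f in (0.9, 1.0, 1.1)]
```

Each perturbed copy is treated as a new training utterance, tripling the effective database size at the cost of some redundancy.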
This paper describes the procedure for collecting speech and corresponding textual data, and the processing needed to create a repository for training an LVCSR system for the Serbian language. The speech database for Serbian consists of speech recordings from audio books, radio programmes and talk shows, as well as utterances read by an array of male and female speakers. Approximately 200 hours of speech recordings have been collected so far, together with corresponding orthographic transcriptions containing around 200 thousand words (over 3 million characters). Audio files are split so that each contains a single utterance. The corresponding transcriptions are used to create label files and to train the language model (LM): new transcriptions are added to the textual corpus collected earlier for the purpose of creating the LM. The software specially designed for building the speech repository for Serbian is also briefly described.

Keywords: large vocabulary continuous speech recognition, Serbian, speech database.
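The splitting of long recordings into single-utterance audio files could look like the sketch below. It is an assumption-laden illustration, not the described software: the 16 kHz sampling rate, the `(start, end, utterance_id)` segment tuples, and all function names are hypothetical stand-ins for time-aligned transcription metadata.

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed sampling rate of the recordings

def split_utterances(samples: np.ndarray, segments) -> dict:
    """Cut a long recording into per-utterance waveform chunks.

    `segments` is a list of (start_sec, end_sec, utterance_id) tuples,
    hypothetically taken from a time-aligned transcription. Returns a
    dict mapping utterance_id -> waveform slice.
    """
    chunks = {}
    for start, end, utt_id in segments:
        a = int(round(start * SAMPLE_RATE))
        b = int(round(end * SAMPLE_RATE))
        chunks[utt_id] = samples[a:b]
    return chunks

# Dummy 10 s recording with two annotated utterances.
recording = np.zeros(10 * SAMPLE_RATE)
segs = [(0.0, 3.5, "spk1_utt001"), (4.0, 9.2, "spk1_utt002")]
chunks = split_utterances(recording, segs)
```

Each chunk would then be written out as its own audio file, paired with a label file holding the matching transcription line.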