The performance of automatic speech recognition (ASR) systems for children's speech is known to suffer from the large variation and mismatch in acoustic and linguistic attributes between children's and adults' speech. One of the identified sources of mismatch is the difference in formant frequencies between adults and children. In this paper, we propose a formant modification method to mitigate the differences between adults' and children's speech and to improve the performance of ASR for children. The proposed technique gives a relative improvement of 27% in system performance over a hybrid DNN-HMM baseline. We also compare the system with related speaker adaptation methods, namely vocal tract length normalization (VTLN) and speaking rate adaptation (SRA), and find that the proposed method outperforms them as well. Combining the proposed method with VTLN and SRA results in a further reduction of the word error rate (WER). We also find that the proposed method performs well even on noisy speech.
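As a rough illustration of the general idea (a sketch only, not the authors' exact procedure), formant frequencies can be scaled by warping the frequency axis of a vocoder's spectral envelope while leaving F0 and aperiodicity untouched. The example below assumes the WORLD vocoder via the pyworld package and soundfile for I/O; the warping factor alpha and the linear interpolation scheme are illustrative choices.

```python
import numpy as np
import pyworld as pw
import soundfile as sf

def scale_formants(wav_in, wav_out, alpha=1.15):
    """Approximate formant scaling by warping the WORLD spectral envelope.

    alpha > 1 shifts formants up (e.g., making adult speech more child-like);
    alpha < 1 shifts them down. F0 and aperiodicity are left unchanged.
    """
    x, fs = sf.read(wav_in)
    if x.ndim > 1:                     # downmix stereo to mono
        x = x.mean(axis=1)
    x = x.astype(np.float64)

    f0, sp, ap = pw.wav2world(x, fs)   # WORLD analysis: pitch, envelope, aperiodicity

    n_bins = sp.shape[1]
    src = np.arange(n_bins)
    warped = np.clip(src / alpha, 0, n_bins - 1)  # query points on the original axis

    sp_warp = np.empty_like(sp)
    for t in range(sp.shape[0]):
        # New envelope at bin k takes the original value at bin k/alpha,
        # so spectral features originally at frequency f move to alpha * f.
        sp_warp[t] = np.interp(warped, src, sp[t])

    y = pw.synthesize(f0, sp_warp, ap, fs)
    sf.write(wav_out, y, fs)
```

Applied to training or test audio, such a warp only approximates true formant modification, since the envelope warp also scales spectral tilt; it is meant purely to make the mismatch-reduction idea concrete.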
This paper describes AaltoASR's speech recognition system for the INTERSPEECH 2020 shared task on Automatic Speech Recognition (ASR) for non-native children's speech. The task is to recognize non-native speech from children of various age groups, given a limited amount of training speech. Moreover, since the speech is spontaneous, false starts are transcribed as partial words, which leads to unseen partial words in the test transcriptions. To cope with these two challenges, we investigate a data augmentation-based approach. First, we apply prosody-based data augmentation to supplement the audio data. Second, we simulate false starts by introducing partial-word noise into the language modeling corpora, creating new words. Acoustic models trained on the prosody-based augmented data outperform models trained with the baseline recipe or with SpecAugment-based augmentation. The partial-word noise also improves the baseline language model. Our ASR system, a combination of these schemes, placed third in the evaluation period with a word error rate (WER) of 18.71%. After the evaluation period, we observed that increasing the amount of prosody-based augmented data leads to better performance. Furthermore, removing low-confidence-score words from the hypotheses yields further gains. These two improvements lower the WER to 17.99%.
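To illustrate how partial-word noise might be injected into a language modeling corpus (the corruption probability, truncation rule, and partial-word marker below are assumptions, not details taken from the paper), one can randomly prepend truncated copies of words to simulate transcribed false starts:

```python
import random

def add_partial_word_noise(sentence, p=0.05, rng=random):
    """With probability p, insert a truncated copy of a word (a false start)
    immediately before the word itself, e.g. 'recognition' -> 'recog- recognition'."""
    out = []
    for word in sentence.split():
        if len(word) > 2 and rng.random() < p:
            cut = rng.randint(1, len(word) - 1)   # keep 1..len-1 characters
            out.append(word[:cut] + "-")          # hypothetical partial-word marker
        out.append(word)
    return " ".join(out)
```

Running this over each line of the LM training text introduces new "words" (the truncated prefixes), so the vocabulary and n-gram statistics then cover false-start tokens similar to the unseen partial words in the test transcriptions.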