Automatic speech recognition (ASR) has moved from science-fiction fantasy to daily reality for citizens of technological societies. Some people seek it out, preferring dictating to typing, or benefiting from voice control of aids such as wheelchairs. Others find it embedded in their hi-tech gadgetry, in mobile phones and car navigation systems, or cropping up in what would until recently have been human roles such as telephone booking of cinema tickets. Wherever you may meet it, computer speech recognition is here, and it's here to stay.
Most automatic speech recognition (ASR) systems are based on hidden Markov models (HMMs) in which Gaussian mixture models (GMMs) describe the output distributions of the subphone states. Dynamic information is typically included by appending time-derivatives to the feature vectors. Although this approach has been quite successful, it makes the false assumption of frame-wise independence of the augmented feature vectors and ignores the spatial correlations in the parametrised speech signal; this is a shortcoming of applying HMMs to acoustic modelling for ASR. Rather than modelling individual frames of data, linear dynamic models (LDMs) characterise entire segments of speech. An autoregressive state evolution through a continuous space gives a Markovian model of the underlying dynamics and of the spatial correlations between feature dimensions. LDMs are therefore well suited to modelling smoothly varying, continuous, yet noisy trajectories such as those found in measured articulatory data.
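As a minimal sketch of the idea, the following Python snippet simulates one segment from the standard linear state-space form usually assumed for an LDM: an autoregressive hidden state x driven by Gaussian noise, and an observation (feature) vector y that is a noisy linear projection of that state. The matrices F, H and the noise covariances Q, R are illustrative placeholders chosen only to make the example run, not parameters from any particular trained system.

import numpy as np

def simulate_ldm(F, Q, H, R, x0, T, rng=None):
    """Simulate T frames from a linear dynamic model (illustrative sketch).

    State evolution (autoregressive, Markovian):
        x[t] = F @ x[t-1] + w[t],   w[t] ~ N(0, Q)
    Observation (one acoustic feature vector per frame):
        y[t] = H @ x[t] + v[t],     v[t] ~ N(0, R)
    """
    rng = np.random.default_rng() if rng is None else rng
    d_state, d_obs = F.shape[0], H.shape[0]
    x = np.asarray(x0, dtype=float)
    states, observations = [], []
    for _ in range(T):
        # Autoregressive state update with full-covariance process noise.
        x = F @ x + rng.multivariate_normal(np.zeros(d_state), Q)
        # Linear observation model; R can encode correlations between feature dimensions.
        y = H @ x + rng.multivariate_normal(np.zeros(d_obs), R)
        states.append(x.copy())
        observations.append(y)
    return np.array(states), np.array(observations)

# Example with made-up dimensions: a 2-dimensional state and 3-dimensional features.
F = np.array([[0.95, 0.05], [0.0, 0.9]])   # hypothetical state-transition matrix
Q = 0.01 * np.eye(2)                        # process-noise covariance
H = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # hypothetical observation matrix
R = 0.05 * np.eye(3)                        # observation-noise covariance
states, obs = simulate_ldm(F, Q, H, R, x0=np.zeros(2), T=50)

Because the state evolves smoothly under F while the noise terms perturb it, the generated observation sequence traces a continuous but noisy trajectory, which is the behaviour the paragraph above attributes to LDMs.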