We present a methodology for controlling machines using spoken language commands. We investigate two major problems affecting speech interfaces for machines: the interpretation of words with fuzzy implications, and out-of-vocabulary (OOV) words in natural conversation. The system proposed in this paper is designed to overcome both problems. It consists of a hidden Markov model (HMM) based automatic speech recognizer (ASR) with a keyword-spotting component that captures machine-sensitive words from running utterances, and a fuzzy-neural network (FNN) based controller that represents the words with fuzzy implications in spoken language commands. The significance of a word, i.e., its contextual meaning given the machine's current state, is incorporated so that the system's output more closely matches the user's intent. The system is also designed modularly, so that the methodology generalizes to systems with heterogeneous functions without degrading performance. The proposed system is experimentally validated by navigating a mobile robot in real time using spoken language commands.
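The abstract does not detail how words with fuzzy implications are mapped to crisp control values. As one hedged illustration only, the sketch below uses triangular fuzzy sets over a normalized speed range with centroid defuzzification; all names, set shapes, and the speed range are illustrative assumptions, not the paper's actual FNN controller.

```python
# Illustrative sketch: mapping fuzzy speed words ("slow", "fast") to a crisp
# motor command via triangular membership functions and centroid
# defuzzification. Set shapes and the [0, 1] speed range are assumptions.

def tri(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over a normalized speed range [0, 1] (illustrative shapes).
SPEED_SETS = {
    "slow":   lambda x: tri(x, -0.01, 0.0, 0.5),
    "medium": lambda x: tri(x, 0.0, 0.5, 1.0),
    "fast":   lambda x: tri(x, 0.5, 1.0, 1.01),
}

def fuzzy_speed(word, steps=101):
    """Centroid defuzzification of a fuzzy speed word to a value in [0, 1]."""
    mu = SPEED_SETS[word]
    xs = [i / (steps - 1) for i in range(steps)]
    ws = [mu(x) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)
```

In the paper's system, the contextual significance of a word (e.g., "faster" relative to the machine's current speed) would further condition this mapping; the sketch omits that state dependence.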
Children with autism spectrum disorder (ASD) show altered behaviors in communication, social interaction, and activity, of which communication deficits are the most prominent. Despite recent technological advances, limited attention has been given to screening and diagnosing ASD by identifying the speech deficiencies (SD) of autistic children at early stages. This research aims to bridge that gap by developing an automated system that distinguishes autistic traits through speech analysis. Data were collected from 40 participants for the initial analysis, and recordings were obtained from 17 participants. We propose a three-stage processing system: the first stage uses thresholding for silence detection and vocal activity detection for vocal isolation; the second stage trains a neural network on frequency-domain representations to build a reliable utterance classifier for the isolated vocals; and the third stage applies a second neural network to recognize autistic traits in the speech patterns of the classified utterances. The results are promising in identifying the SD of autistic children, with the utterance classifier achieving 78% accuracy and the pattern recognizer 72% accuracy.
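The first stage is described only at a high level. A minimal sketch of energy-threshold silence detection is given below; the frame length, threshold value, and 16 kHz sampling-rate assumption are illustrative, since the abstract does not specify the system's actual parameters.

```python
# Sketch of stage 1: frame-wise RMS energy thresholding to separate
# silence from vocal activity. Frame length (10 ms at an assumed 16 kHz)
# and the threshold are illustrative assumptions, not the paper's values.
import math

def frame_rms(frame):
    """Root-mean-square energy of one frame of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def isolate_vocal_frames(samples, frame_len=160, threshold=0.02):
    """Split the signal into non-overlapping frames and keep those whose
    RMS energy exceeds the silence threshold (likely vocal activity)."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [f for f in frames if frame_rms(f) > threshold]
```

The frames that survive this filter would then be passed to the stage-two utterance classifier.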