Deaf and mute people cannot communicate their feelings efficiently to ordinary people. Their common method of communication is sign language, but sign language is unfamiliar to most ordinary people, so effective communication between deaf and mute people and ordinary people is seriously hindered. This paper presents the development of an Android mobile application that translates sign language into speech for ordinary people, and speech into text for deaf and mute people, using a Convolutional Neural Network (CNN). The study focuses on a vision-based Sign Language Recognition (SLR) and Automatic Speech Recognition (ASR) mobile application. The main challenges were audio classification and image classification; therefore, a CNN was trained on audio clips and images. The Mel-Frequency Cepstral Coefficient (MFCC) approach was used for ASR. The application was developed with Python and Android Studio. After development, the application was tested on the letters A and C, which were identified with 95% accuracy.
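To illustrate the kind of image-classification pipeline the abstract describes, the following is a minimal NumPy sketch of a CNN forward pass (convolution, ReLU, max pooling, and a softmax over two letter classes). The layer sizes, filter, and weights here are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, cropping any ragged border."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((28, 28))            # toy grayscale hand-sign image (assumed size)
kernel = rng.standard_normal((3, 3))    # one convolutional filter (random, not trained)
features = max_pool(relu(conv2d(image, kernel)))
weights = rng.standard_normal((2, features.size))  # dense layer: 2 classes, 'A' and 'C'
probs = softmax(weights @ features.ravel())
print(probs)  # class probabilities over the two letters; they sum to 1
```

In the actual system a trained network would supply the filter and dense weights, and the audio branch would feed MFCC features into a similar classifier instead of raw pixels.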