People with speech and hearing impairments communicate through sign (gesture) languages. One of the major issues our society faces is the difficulty these individuals have in sharing their thoughts and feelings with people who can hear. People with disabilities account for roughly 15% of the global population, yet most hearing people are unwilling or unable to learn sign language. This communication gap is a major hindrance to the growth and advancement of people with speech and hearing impairments, as well as a challenge for society as a whole. The main goal of this project is to break down communication barriers between people with these disabilities and the rest of society. The project aims to design a model that recognizes sign language alphabets (hand gestures) and converts them into text and sound using a machine learning approach. The performance of this method is evaluated on a publicly available sign language dataset. Our approach is based on Convolutional Neural Networks: we use the Inception V3 deep learning model for image classification. Hand gestures are captured as images by a webcam, and the trained model recognizes the alphabet letter corresponding to each gesture. We have tried to overcome existing limitations in Sign Language Recognition and to increase efficiency.
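As a rough illustration of the approach described above, the following sketch builds an Inception V3-based classifier for sign language alphabet images using Keras. The class count, input size, head layers, and training settings are assumptions for illustration only, not details taken from the paper; a real setup would typically load ImageNet pretrained weights and fine-tune on the sign language dataset.

```python
# Hypothetical sketch: an Inception V3 backbone with a small classification
# head for sign-language alphabet recognition. All hyperparameters here
# (26 classes, 299x299 input, Dense(256) head) are illustrative assumptions.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumed: one class per alphabet letter

# weights=None keeps this sketch self-contained; in practice one would use
# weights="imagenet" and fine-tune on the captured hand-gesture images.
base = InceptionV3(weights=None, include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the convolutional backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),   # pool feature maps to a single vector
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # per-letter probabilities
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

In such a pipeline, webcam frames would be resized to the network's input size, passed through `model.predict`, and the argmax of the softmax output mapped back to an alphabet letter for text and speech output.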