It is difficult for most of us to imagine, but many who are deaf-mute rely on sign language as their primary means of communication: in essence, they hear and talk through their hands. Sign languages are visual, natural languages used by deaf-mute people all over the world, and in them the hands convey most of the information. Hence, vision-based automatic sign language recognition systems have to extract relevant hand features from real-life image sequences to allow correct and stable gesture classification. In our proposed system, we intend to recognize some very basic elements of sign language and to translate them to text. First, video will be captured frame-by-frame and processed to extract the appropriate image. This retrieved image will then undergo BLOB analysis and be compared against the images stored in a statistical database; the matched image will determine which alphabet sign was performed. Here, we implement only American Sign Language finger-spellings, and we construct words and sentences from them.
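To make the pipeline concrete, the sketch below shows one way the described steps could be wired together in Python with OpenCV. It is a minimal illustration under stated assumptions, not the paper's implementation: the abstract does not specify the segmentation method, the blob features, or the database format, so Otsu thresholding, largest-contour selection, and normalized cross-correlation template matching are stand-ins, and `templates` is a hypothetical in-memory substitute for the statistical database.

```python
# Minimal sketch of the described pipeline, assuming OpenCV (cv2).
# Segmentation and matching choices here are assumptions, not the
# paper's method; `templates` stands in for the statistical database.
import cv2


def extract_hand_blob(frame):
    """Segment the largest blob (assumed to be the hand) from a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu thresholding assumes a plain backdrop behind the signer.
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)  # largest blob = hand
    x, y, w, h = cv2.boundingRect(hand)
    # Crop and normalize the blob so it can be compared to templates.
    return cv2.resize(mask[y:y + h, x:x + w], (64, 64))


def classify_sign(blob, templates):
    """Match the blob against stored templates; return the best letter."""
    best_letter, best_score = None, -1.0
    for letter, template in templates.items():
        # Normalized cross-correlation on equal-sized images -> 1x1 result.
        score = cv2.matchTemplate(blob, template, cv2.TM_CCOEFF_NORMED)[0][0]
        if score > best_score:
            best_letter, best_score = letter, score
    return best_letter


# Usage: capture video frame-by-frame and emit the recognized letters.
templates = {}  # hypothetical: {'A': 64x64 binary template, 'B': ..., ...}
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = extract_hand_blob(frame)
    if blob is not None and templates:
        letter = classify_sign(blob, templates)
        print(letter, end="", flush=True)
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Recognized letters printed in sequence could then be buffered and segmented into words, which is how the system would build words and sentences from individual finger-spelled signs.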
General Terms
Sign language translation, gesture recognition system.