In recent years, pattern recognition for human-computer interaction (HCI) systems has spread across computer vision and machine learning applications. One of the most important of these applications is recognizing the hand gestures used to communicate with deaf people, in particular recognizing the disjoined (dashed) letters that open certain surahs of the Quran. In this paper, we propose an Arabic Alphabet Sign Language Recognition System (AArSLRS) using a vision-based approach. The proposed system consists of four stages: data acquisition, preprocessing, feature extraction, and classification. The system deals with three types of datasets: bare hands against a dark background, bare hands against a light background, and hands wearing dark-colored gloves. AArSLRS begins by acquiring an image of an alphabet gesture, then detects the hand in the image and isolates it from the background using one of the proposed methods, after which hand features are extracted according to the chosen feature-selection method. For classification, we used supervised learning techniques to classify the 28 letters of the Arabic alphabet using 9,240 images, focusing on the 14 letters that appear in the openings of Quran surahs in Quranic Sign Language (QSL). AArSLRS achieved an accuracy of 99.5% with the K-Nearest Neighbor (KNN) classifier.
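The classification stage described above can be sketched with a minimal k-nearest-neighbor classifier. The abstract does not publish the system's feature layout, so the feature dimension, class count, and synthetic Gaussian clusters below are illustrative assumptions standing in for real extracted gesture features:

```python
import numpy as np

# Minimal KNN sketch for the final (classification) stage.
# Feature vectors are synthetic stand-ins: one Gaussian cluster
# per letter class. Dimensions and counts are assumptions.
rng = np.random.default_rng(0)

NUM_CLASSES = 28        # Arabic alphabet letters
FEATURES = 16           # assumed feature-vector length
SAMPLES_PER_CLASS = 30

centers = rng.normal(size=(NUM_CLASSES, FEATURES)) * 5.0
X_train = np.vstack([c + rng.normal(size=(SAMPLES_PER_CLASS, FEATURES))
                     for c in centers])
y_train = np.repeat(np.arange(NUM_CLASSES), SAMPLES_PER_CLASS)

def knn_predict(x, k=5):
    """Classify one feature vector by majority vote of its k
    nearest training samples (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return np.bincount(nearest, minlength=NUM_CLASSES).argmax()

# A query at class 3's cluster centre is labelled as class 3.
print(knn_predict(centers[3]))  # prints 3
```

A real pipeline would replace the synthetic clusters with feature vectors extracted from the segmented hand images, but the distance-and-vote logic is the same.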
Sign language continues to be the preferred communication tool of the deaf and hearing-impaired. It is a well-structured code of hand gestures in which every gesture has a specific meaning. This paper aims to develop a system for Automatic Translation of Arabic Sign Language to Arabic Text (ATASAT). The system acts as a translator between deaf and mute people and hearing people, to enhance their communication. The proposed system consists of five main stages: video and image capture, video and image processing, hand sign construction, classification, and finally text transformation and interpretation. The system depends on building two image-feature datasets of Arabic Sign Language alphabet gestures from two resources: an Arabic Sign Language dictionary and gestures collected from different human signers. It also uses gesture recognition techniques that allow the user to interact with the outside world. The system offers a novel hand-detection technique that detects and extracts Arabic Sign hand gestures from images or video. In the hand sign construction step we use a set of appropriate features, and in the classification step we compare several algorithms, namely KNN, MLP, C4.5, VFI, and SMO, to select the best-performing classifier.
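The classifier-comparison step can be illustrated with a held-out evaluation. The paper compares KNN, MLP, C4.5, VFI, and SMO; as a self-contained sketch under assumed synthetic features, the code below compares two simple numpy classifiers (1-NN and nearest-centroid) on the same train/test split, the same shape of experiment used to pick the better classifier:

```python
import numpy as np

# Classifier-comparison sketch: train both models on the same
# split and compare held-out accuracy. Features are synthetic
# stand-ins; all dimensions are illustrative assumptions.
rng = np.random.default_rng(1)
NUM_CLASSES, FEATURES, PER_CLASS = 10, 8, 40

centers = rng.normal(size=(NUM_CLASSES, FEATURES)) * 4.0
X = np.vstack([c + rng.normal(size=(PER_CLASS, FEATURES)) for c in centers])
y = np.repeat(np.arange(NUM_CLASSES), PER_CLASS)

# Shuffle, then hold out 25% of samples for testing.
idx = rng.permutation(len(y))
split = int(0.75 * len(y))
train, test = idx[:split], idx[split:]

def acc_1nn():
    """Accuracy of a 1-nearest-neighbour classifier on the test split."""
    preds = []
    for x in X[test]:
        d = np.linalg.norm(X[train] - x, axis=1)
        preds.append(y[train][d.argmin()])
    return float(np.mean(np.array(preds) == y[test]))

def acc_centroid():
    """Accuracy of a nearest-class-centroid classifier on the test split."""
    cents = np.array([X[train][y[train] == c].mean(axis=0)
                      for c in range(NUM_CLASSES)])
    d = np.linalg.norm(X[test][:, None, :] - cents[None, :, :], axis=2)
    return float(np.mean(d.argmin(axis=1) == y[test]))

print(f"1-NN accuracy:             {acc_1nn():.2f}")
print(f"nearest-centroid accuracy: {acc_centroid():.2f}")
```

Swapping in the actual algorithms (e.g. from Weka or scikit-learn) and the real gesture features turns this sketch into the comparison the paper reports.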
Pattern recognition is central to many systems that apply computer vision techniques, and among the most important of these are systems that recognize the hand gestures used in interpreting and translating Arabic Sign Language (ArSL) into written text for the deaf and mute. Arabic Sign Language Recognition (ArSLR) serves as a popular communication tool between the hearing-disabled and deaf people throughout the Arab world. This paper proposes an image-based ArSLR system that uses gesture recognition techniques to allow the user to interact with the outside world. In particular, we propose the design and development of a system for automatic translation of Arabic sign language to Arabic text, including extraction of Arabic word stems and disambiguation of similar words with different meanings. The proposed system starts by acquiring images of alphabet letter gestures; second, it detects the hand in the image and isolates it from the background; the third stage extracts the features and characteristics of the hand sign pattern; the fourth stage classifies the hand gesture using one of the most effective classification algorithms; finally, the recognized letter of the sign language alphabet is interpreted and translated into its corresponding character in the Arabic alphabet. In this paper we present the results of the first stage of the system, image capture, together with preliminary processing results for each captured image aimed at detecting the hand, corresponding to the second stage of the system. The results of this paper will be used in studying the remaining stages of the system, which will be presented in separate research papers once that work is complete.
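The hand-detection stage described by all three abstracts can be realized, in its simplest form, by intensity thresholding when the hand is brighter than a dark background. The sketch below is one assumed realization, not the papers' exact method; the 8x8 frame is synthetic, standing in for a captured image:

```python
import numpy as np

# Hand-detection sketch for stage two: isolate the hand from a
# dark background by intensity thresholding, then find its
# bounding box. The frame is synthetic; real input would be a
# captured grayscale image.
frame = np.full((8, 8), 20, dtype=np.uint8)  # dark background (~20)
frame[2:6, 3:7] = 200                        # bright hand region (~200)

def segment_hand(img, threshold=128):
    """Return a binary mask of pixels brighter than the threshold
    and the bounding box (top, left, bottom, right) of that region,
    assumed here to be the hand."""
    mask = img > threshold
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    box = (rows[0], cols[0], rows[-1] + 1, cols[-1] + 1)
    return mask, box

mask, box = segment_hand(frame)
print(tuple(int(v) for v in box))  # prints (2, 3, 6, 7)
print(int(mask.sum()))             # prints 16 hand pixels
```

A light background or gloved hand (the other two dataset types in the first abstract) would need a different threshold direction or a color-based mask, but the isolate-then-crop structure is the same.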