“…Results show the robustness of the skin classification against various illumination conditions. After detecting skin regions, a connected component labeling algorithm [7] is used, in which each connected subset of image pixels is uniquely labeled. The algorithm scans the image, labeling the underlying pixels according to a predefined connectivity scheme and the values of their neighbors.…”
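As a minimal sketch of the labeling step described above (a generic flood-fill implementation, not necessarily the exact algorithm of [7]), each connected subset of foreground pixels receives a unique integer label under a chosen connectivity scheme:

```python
from collections import deque
import numpy as np

def label_components(mask, connectivity=4):
    """Label connected foreground regions of a binary mask.

    Each connected subset of foreground pixels receives a unique
    integer label; background pixels stay 0.
    """
    if connectivity == 4:
        neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity also checks the diagonals
        neighbors = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and labels[r, c] == 0:
                current += 1              # start a new component
                queue = deque([(r, c)])
                labels[r, c] = current
                while queue:              # flood-fill this component
                    y, x = queue.popleft()
                    for dy, dx in neighbors:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

# Two separate blobs under 4-connectivity:
mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]])
labels, n = label_components(mask)
```

With 4-connectivity the diagonal pixel at (1, 1) joins the top-left blob (it touches (0, 1)), while the right column forms a second component; switching to 8-connectivity would merge regions that touch only at corners.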
Abstract. One of the most common means of communication in the deaf community is sign language. This paper focuses on the problem of recognizing Arabic sign language at the word level as used by the deaf community. The proposed system combines a spatio-temporal local binary pattern (STLBP) feature extraction technique with a support vector machine (SVM) classifier. The system takes a sequence of sign images or a video stream as input and localizes the head and hands using the IHLS color space and a random forest classifier. A feature vector is extracted from the segmented images using the local binary pattern on three orthogonal planes (LBP-TOP) algorithm, which jointly captures the appearance and motion of gestures. The resulting feature vector is classified with an SVM. The proposed method does not require signers to wear gloves or any other marker devices. Experimental results on an Arabic sign language (ArSL) database containing 23 signs (words) recorded by 3 signers show the effectiveness of the proposed method. In the signer-dependent test, the proposed system based on LBP-TOP and SVM achieves an overall recognition rate of up to 99.5%.
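The LBP-TOP descriptor applies the basic local binary pattern operator on the XY, XT, and YT planes of the video volume and concatenates the resulting histograms. As a minimal sketch of that underlying operator (not the paper's exact parameterization), here is a plain 8-neighbor LBP on a single plane:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern on a 2D grayscale array.

    Each interior pixel is compared with its 8 neighbors; a neighbor
    >= the center contributes a 1-bit, giving an 8-bit code per pixel.
    LBP-TOP applies this same operator on the XY, XT, and YT planes
    of a video volume and concatenates the three code histograms.
    """
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    # Neighbor offsets in a fixed clockwise order starting top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=int)
    for bit, (dr, dc) in enumerate(offsets):
        neighbor = img[1 + dr: img.shape[0] - 1 + dr,
                       1 + dc: img.shape[1] - 1 + dc]
        codes += (neighbor >= center).astype(int) << bit
    return codes

img = np.array([[9, 9, 9],
                [0, 5, 0],
                [0, 0, 0]])
code = lbp_codes(img)[0, 0]   # only the top row exceeds the center 5
```

A histogram of these codes over a segmented hand region would form one slice of the feature vector fed to the SVM.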
“…Turning to the next paper, on colour-based hand and finger detection technology for user interaction, Kang et al. (2008) discussed detecting the human hand based on contours. Contours are closed curves on the frame and can be nested at several levels.…”
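As a rough illustration of the contour idea (a sketch on a binary mask, not Kang et al.'s actual method), the pixels that lie on a contour can be found by checking each foreground pixel for a background 4-neighbor; tracing them in order yields the closed curves, and holes inside a region give the additional contour levels:

```python
import numpy as np

def boundary_pixels(mask):
    """Return a boolean map of foreground pixels lying on a contour.

    A foreground pixel is on the boundary when at least one of its
    4-neighbors is background (or outside the image). Interior pixels,
    whose 4-neighbors are all foreground, are excluded.
    """
    mask = np.asarray(mask, dtype=bool)
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel is interior iff up, down, left, and right are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

mask = np.ones((5, 5), dtype=int)   # a solid 5x5 blob
edge = boundary_pixels(mask)        # its one-pixel outer ring
```

For a solid 5x5 square the result is the 16-pixel outer ring; running the same test on a mask with a hole would additionally mark the hole's inner ring, the second contour level mentioned in the excerpt.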
Section: Available Trends In The Same Area Of Research
Ubiquitous computing is the idea of embedding computation into the everyday objects of the environment we live in. It is an approach to making technology and computation available everywhere, so that people can interact with the abundant information around them in a more natural and friendly manner. Our project, the palm display system, is one such approach: it uses one's palm as the graphical user interface, enabling the user to interact with it using a fingertip. We implemented this project using computer vision techniques. In this study, we describe the details of the palm display system, discuss several implementation challenges, and show results using a sample photo-viewer application. The main idea behind the project is to detect interaction using shadow-based techniques, without the use of a fingertip marker.
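One plausible reading of the shadow-based technique (an assumption on our part, not the project's documented algorithm): while the finger hovers above the palm, its cast shadow is visible as a dark blob near the fingertip; on contact the shadow is occluded by the finger itself. A minimal sketch of that touch test:

```python
import numpy as np

def touch_detected(gray_roi, shadow_thresh=60, area_ratio=0.02):
    """Hypothetical shadow-based touch test on a fingertip ROI.

    Assumption (not the authors' exact method): a hovering finger
    casts a visible dark shadow offset from the fingertip, and on
    contact that shadow vanishes under the finger. We declare a
    touch when the dark-pixel fraction in the region of interest
    drops below `area_ratio`.
    """
    shadow = gray_roi < shadow_thresh   # dark (shadow) pixels
    fraction = shadow.mean()            # fraction of ROI in shadow
    return fraction < area_ratio

hover = np.full((20, 20), 200)
hover[5:10, 5:10] = 30                  # visible shadow blob: no touch
touch = np.full((20, 20), 200)          # shadow occluded: touch
```

The thresholds here are placeholders; a real system would calibrate them against the ambient lighting that the excerpt's robustness claims depend on.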
“…This is essentially a process of dimension reduction, or feature reduction, as it eliminates the irrelevant data in the given input while retaining the important information. Several feature extraction techniques [5][6][7][8][9][10][11][12][13][14] exist for gesture recognition, but in this paper MFCC, which is mainly used in speech recognition systems, has been used for feature extraction. The purpose of using MFCC here is to assess its effectiveness in the field of image processing as well.…”
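A sketch of the standard MFCC pipeline, with textbook parameters rather than those of the paper (when applied to images, rows or flattened blocks are typically treated as 1-D signals; the sketch below operates on a 1-D signal): frame the input, window it, take a power spectrum, pool it through a mel filterbank, take logs, and decorrelate with a DCT. Keeping only the first few DCT coefficients is the dimension-reduction step mentioned above.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                 # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                 # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_filters=26, n_coeffs=13):
    """Minimal MFCC: frame, window, power spectrum, mel filterbank,
    log, then a type-II DCT keeping the first n_coeffs coefficients."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    fb = mel_filterbank(n_filters, frame_len, sr)
    # DCT-II basis for the final decorrelation/truncation step.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1))
                 / (2 * n_filters))
    feats = np.empty((n_frames, n_coeffs))
    for t in range(n_frames):
        frame = signal[t * hop: t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2 / frame_len
        mel_energy = np.log(fb @ power + 1e-10)  # guard against log(0)
        feats[t] = dct @ mel_energy
    return feats

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)   # 1 s of a 440 Hz tone
features = mfcc(tone, sr)            # one 13-dim vector per frame
```

Each input frame of 400 samples is reduced to 13 coefficients, which is the feature-reduction effect the excerpt describes.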
Section: Recognition System For Hand Gesture