Sign language recognition (SLR) is a multidisciplinary research area spanning image processing, pattern recognition, and artificial intelligence. The major hurdle for SLR is the occlusion of one hand by the other, which leads to poor segmentation; the resulting feature vectors then produce erroneous sign classifications and a degraded recognition rate. To overcome this difficulty, this paper proposes a four-camera model for recognizing gestures of Indian sign language. The pipeline comprises segmentation for hand extraction, shape feature extraction with elliptic Fourier descriptors, and pattern classification using artificial neural networks trained with the backpropagation algorithm. The computed classification rates provide experimental evidence that the four-camera model outperforms a single-camera model.
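The shape-feature step above can be sketched with the standard Kuhl–Giardina elliptic Fourier formulation. This is a minimal NumPy illustration, not the paper's implementation; the function name, harmonic order, and circle test contour are illustrative assumptions.

```python
import numpy as np

def elliptic_fourier_descriptors(contour, order=10):
    """Kuhl-Giardina elliptic Fourier coefficients of a closed contour.

    contour: (K, 2) array of (x, y) boundary points.
    Returns an (order, 4) array of harmonics (a_n, b_n, c_n, d_n),
    usable as a shape feature vector for a classifier.
    """
    # Segment displacements, closing the contour back to its first point.
    dxy = np.diff(contour, axis=0, append=contour[:1])
    dt = np.sqrt((dxy ** 2).sum(axis=1))          # segment lengths
    t = np.concatenate(([0.0], np.cumsum(dt)))    # cumulative arc length
    T = t[-1]                                     # total perimeter

    coeffs = np.zeros((order, 4))
    for n in range(1, order + 1):
        c = T / (2.0 * n ** 2 * np.pi ** 2)
        phi = 2.0 * n * np.pi * t / T
        d_cos = np.cos(phi[1:]) - np.cos(phi[:-1])
        d_sin = np.sin(phi[1:]) - np.sin(phi[:-1])
        coeffs[n - 1] = c * np.array([
            (dxy[:, 0] / dt * d_cos).sum(),  # a_n
            (dxy[:, 0] / dt * d_sin).sum(),  # b_n
            (dxy[:, 1] / dt * d_cos).sum(),  # c_n
            (dxy[:, 1] / dt * d_sin).sum(),  # d_n
        ])
    return coeffs

# Sanity check on a synthetic circle of radius 5: the first harmonic
# should be close to (a_1, b_1, c_1, d_1) = (5, 0, 0, 5).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([5 * np.cos(theta), 5 * np.sin(theta)], axis=1)
efd = elliptic_fourier_descriptors(circle, order=3)
```

In practice the coefficients are normalized for rotation, scale, and starting point before being fed to the neural network, so that the classifier sees only shape information.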
Extracting and recognizing complex human movements from unconstrained online or offline video sequences is a challenging task in computer vision. This paper proposes the classification of Indian classical dance actions using a powerful artificial intelligence tool: convolutional neural networks (CNNs). In this work, human action recognition is performed on Indian classical dance videos recorded both offline (controlled recordings) and online (live performances from YouTube). The offline data were created with ten different subjects performing 200 familiar dance mudras/poses from different Indian classical dance forms against various background environments. The online dance data were collected from YouTube for ten different subjects. In both cases, each dance pose occupies 60 frames of a video. CNN training is performed with eight different samples, each consisting of multiple sets of subjects; the remaining two samples are used for testing the trained CNN. Different CNN architectures were designed and tested on our data to obtain better recognition accuracy. We achieved a 93.33% recognition rate, higher than other classifier models reported on the same dataset.
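The building block of the CNN architectures described above is a convolution, activation, and pooling stage applied to each video frame. The following is a minimal NumPy sketch of one such stage, not the paper's architecture; the frame size, kernel, and layer choices are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size-by-size window."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One conv -> ReLU -> pool stage on a hypothetical 60x60 grayscale frame:
# 60x60 input -> 58x58 feature map -> 29x29 pooled map.
frame = np.random.default_rng(0).random((60, 60))
fmap = max_pool(relu(conv2d(frame, np.ones((3, 3)) / 9.0)))
```

Stacking several such stages, followed by fully connected layers and a softmax over the pose classes, yields the kind of architecture the paper evaluates.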
Machine learning is penetrating most of the classification and recognition tasks performed by computers. This paper proposes the classification of flower images using a powerful artificial intelligence tool, convolutional neural networks (CNNs). A flower image database with 9500 images is considered for the experimentation, and the entire database is subcategorized into four datasets. CNN training is initiated in five batches, and testing is carried out on all four datasets. Different CNN architectures were designed and tested on our flower image data to obtain better recognition accuracy. Various pooling schemes were implemented to improve the classification rates. We achieved a 97.78% recognition rate, higher than other classifier models reported on the same dataset.
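The pooling schemes mentioned above differ in how each window of a feature map is summarized: max pooling keeps the strongest activation, while average pooling smooths over the window. A minimal NumPy sketch of the two common variants (the function name and test values are illustrative, not from the paper):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling over size-by-size windows of a 2D feature map."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    windows = x[:h, :w].reshape(h // size, size, w // size, size)
    if mode == "max":
        return windows.max(axis=(1, 3))      # strongest response per window
    if mode == "avg":
        return windows.mean(axis=(1, 3))     # smoothed response per window
    raise ValueError(f"unknown pooling mode: {mode}")

x = np.arange(16, dtype=float).reshape(4, 4)
mx = pool2d(x, mode="max")   # [[ 5,  7], [13, 15]]
av = pool2d(x, mode="avg")   # [[2.5, 4.5], [10.5, 12.5]]
```

Swapping the pooling mode changes which image details survive downsampling, which is why comparing schemes can shift the final classification rate.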