Hand gesture recognition is one of the most effective modes of interaction between humans and computers because it is highly flexible and user-friendly. A real-time hand gesture recognition system should aim to provide a user-independent interface with high recognition performance. Convolutional neural networks (CNNs) currently achieve high recognition rates in image classification problems. However, because large labeled sample sets of static hand gesture images are not available, training deep CNNs such as AlexNet, VGG-16, and ResNet from scratch is challenging. Therefore, inspired by CNN performance, an end-to-end fine-tuning method for a pre-trained CNN model with a score-level fusion technique is proposed here to recognize hand gestures from datasets with few gesture images. The effectiveness of the proposed technique is evaluated using leave-one-subject-out cross-validation (LOO CV) and regular CV tests on two benchmark datasets. A real-time American Sign Language (ASL) recognition system is developed and tested using the proposed technique.
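As a rough illustration of the fine-tuning-with-score-level-fusion idea described above, the following PyTorch sketch replaces the classifier heads of two ImageNet-pretrained backbones (AlexNet and VGG-16), fine-tunes all layers end to end on a small gesture dataset, and averages their softmax scores at prediction time. The dataset path, class count, backbone pairing, and hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch: fine-tune two ImageNet-pretrained CNNs end to end on a small
# static hand gesture dataset and fuse their class scores (score-level fusion).
# Dataset layout, class count, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 24  # assumed number of static gesture classes (e.g. ASL letters)
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

def build_finetuned(model_fn, num_classes):
    """Load an ImageNet-pretrained backbone and replace its classifier head."""
    model = model_fn(weights="IMAGENET1K_V1")
    if hasattr(model, "fc"):                      # ResNet-style head
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    else:                                         # AlexNet/VGG-style head
        in_feats = model.classifier[-1].in_features
        model.classifier[-1] = nn.Linear(in_feats, num_classes)
    return model.to(DEVICE)

# Two backbones whose softmax scores are averaged at test time.
net_a = build_finetuned(models.alexnet, NUM_CLASSES)
net_b = build_finetuned(models.vgg16, NUM_CLASSES)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("gestures/train", transform=transform)  # assumed path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
for net in (net_a, net_b):                        # fine-tune every layer end to end
    optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
    net.train()
    for epoch in range(5):                        # small epoch budget for a small dataset
        for images, labels in loader:
            images, labels = images.to(DEVICE), labels.to(DEVICE)
            optimizer.zero_grad()
            loss = criterion(net(images), labels)
            loss.backward()
            optimizer.step()

@torch.no_grad()
def predict_fused(images):
    """Score-level fusion: average the two softmax score vectors, then take the argmax."""
    net_a.eval(); net_b.eval()
    images = images.to(DEVICE)
    scores = (torch.softmax(net_a(images), dim=1) +
              torch.softmax(net_b(images), dim=1)) / 2
    return scores.argmax(dim=1)
```

Averaging posterior scores rather than hard labels is one common form of score-level fusion; it keeps each backbone's confidence information while remaining cheap enough for a real-time recognizer.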
At present, people spend most of their time in passive rather than active mode. Sitting at a computer for long periods can lead to unhealthy conditions such as shoulder pain, numbness, and headaches. To mitigate this problem, posture should be changed at regular intervals. This paper uses the inertial sensors built into a smartphone to monitor and help correct the unhealthy human sitting behaviors (HSBs) of office workers. Six volunteers within an age band of 26 ± 3 years, four male and two female, were monitored. The sensor unit was attached to the rear upper trunk of the body, and a dataset was generated for five different activities performed by the subjects while seated on an office chair. A correlation-based feature selection (CFS) technique and particle swarm optimization (PSO) are used jointly to select feature vectors. The optimized features are fed to supervised machine learning classifiers such as naive Bayes, SVM, and KNN for recognition. The SVM classifier achieved 99.90% overall accuracy across the different sitting behaviors using the accelerometer, gyroscope, and magnetometer sensors.
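A minimal sketch of the recognition pipeline described above, assuming windowed 9-axis (accelerometer, gyroscope, magnetometer) data: hand-crafted time-domain features are extracted per window, a feature-selection step is applied, and an SVM is evaluated with cross-validation. Because scikit-learn provides neither CFS nor PSO, SelectKBest with an ANOVA F-score stands in for the joint CFS+PSO stage; the window size, feature set, and synthetic data are illustrative assumptions rather than the paper's exact setup.

```python
# Hypothetical sketch of the recognition stage: windowed 9-axis inertial data ->
# time-domain features -> feature selection -> SVM. SelectKBest(f_classif) is a
# stand-in for the paper's joint CFS + PSO feature selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(window):
    """Simple per-axis time-domain statistics for one sensor window (samples x 9 axes)."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
    ])

# Placeholder data: 600 windows of 128 samples x 9 axes (acc + gyro + mag),
# each labeled with one of five sitting behaviors.
rng = np.random.default_rng(0)
windows = rng.normal(size=(600, 128, 9))
y = rng.integers(0, 5, size=600)
X = np.vstack([window_features(w) for w in windows])

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),   # stand-in for the CFS + PSO selection stage
    SVC(kernel="rbf", C=10.0),
)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real labeled windows in place of the synthetic arrays, the same pipeline object can be fitted once and reused for online classification of incoming sensor windows.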