This article analyzes the most widely used sign languages and the relationships among them, and presents the development of a hand gesture recognition system for a verbal robot applied to the Kazakh language. The proposed system includes a touch sensor that measures the electrical properties of the user's skin on contact, providing more accurate information for simulating and displaying the robot hand's gestures. Within the system, the speed and accuracy of recognition are calculated for each gesture of the verbal robot. The average recognition accuracy exceeded 98%, and the detection time was 3 ms on a 1.9 GHz Jetson Nano processor, which is sufficient for a robot that displays natural sign-language gestures. A complete fingerspelling alphabet of Kazakh sign language for the verbal robot is also proposed. A machine learning method was used to improve the quality of gesture recognition. The operability of the developed gesture recognition technique was tested, and computational experiments evaluated the effectiveness of the algorithms and software governing the verbal robot's response to voice commands, based on automatic recognition of multilingual human speech. The authors thus propose an intelligent verbal complex implemented in Python with the CMUSphinx speech-recognition module and a PyOpenGL graphical simulator for command execution; the robot manipulation module is based on 3D models from ABB.
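The per-gesture speed and accuracy evaluation described above can be sketched as follows. The gesture labels, sample counts, latency figures, and the `summarize` helper are illustrative assumptions for this sketch, not the authors' code or measurements.

```python
import statistics

def summarize(per_gesture):
    """Aggregate per-gesture recognition results.

    per_gesture maps a gesture label to (correct, total, latency_ms):
    how many test samples of that gesture were recognized correctly,
    and the mean detection time for that gesture.
    """
    accuracies = [correct / total for correct, total, _ in per_gesture.values()]
    latencies = [lat for _, _, lat in per_gesture.values()]
    return statistics.mean(accuracies), statistics.mean(latencies)

# Hypothetical results for three Kazakh letters (illustrative numbers only).
results = {
    "А": (99, 100, 2.8),
    "Б": (98, 100, 3.1),
    "Ғ": (100, 100, 3.0),
}
mean_acc, mean_ms = summarize(results)
# mean_acc == 0.99, mean_ms ≈ 2.97
```

In a real pipeline each entry would come from running the recognizer over a labeled test set and timing each detection; the per-gesture breakdown shows which signs drag the average down.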
Medicine is one of the richest sources of data, generating and storing massive datasets that range from descriptions of clinical symptoms to various types of biochemical data and device images. Manually searching for and detecting biomedical patterns in such massive data is a complicated task; data mining can improve this process. Stomach disorders are among the most common disorders, affecting over 60% of the human population. In this work, the classification performance of four nonlinear supervised learning algorithms (Logit, K-Nearest Neighbour, XGBoost, and LightGBM) is compared and discussed for five types of stomach disorders. The objectives of this research are to identify trends in the use and improvement of machine learning algorithms for detecting symptoms of stomach disorders, and to study the problems of applying such algorithms to this task. Bayesian optimization is used to find optimal hyperparameters for the algorithms, which is faster than the grid search method. The results show that the algorithms based on gradient boosting (XGBoost and LightGBM) achieve better accuracy, above 95%, on the test dataset. Because diagnosis and confirmation of diseases require higher accuracy, the article proposes using optimization methods together with the machine learning algorithms to improve accuracy. Keywords: Stomach disorder • machine learning algorithm • decision support system • Bayesian optimization.
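Bayesian optimization speeds up tuning by using a surrogate model to propose promising hyperparameters sequentially instead of exhaustively scanning a grid. The stdlib-only sketch below illustrates that sequential loop with a toy validation-accuracy objective and a deliberately simplified nearest-neighbour surrogate plus exploration bonus standing in for a full Gaussian-process posterior; the objective, candidate grid, and all constants are illustrative assumptions, not the paper's setup.

```python
import math
import random

def validation_accuracy(lr):
    # Toy stand-in for training XGBoost/LightGBM at learning rate `lr`
    # and scoring on a held-out set; it peaks at lr = 10**-1.5.
    return 1.0 - 0.05 * (math.log10(lr) + 1.5) ** 2

def tune(candidates, n_iter=10, seed=0):
    rng = random.Random(seed)
    observed = {}
    for lr in rng.sample(candidates, 2):          # small random initial design
        observed[lr] = validation_accuracy(lr)
    for _ in range(n_iter):
        def ucb(lr):
            # Simplified surrogate: score of the nearest evaluated point
            # plus a bonus that grows with distance (exploration term).
            nearest = min(observed, key=lambda o: abs(math.log10(o) - math.log10(lr)))
            return observed[nearest] + 0.1 * abs(math.log10(nearest) - math.log10(lr))
        untried = [lr for lr in candidates if lr not in observed]
        if not untried:
            break
        nxt = max(untried, key=ucb)               # acquisition step
        observed[nxt] = validation_accuracy(nxt)
    return max(observed, key=observed.get)

grid = [10 ** e for e in (-4.0, -3.5, -3.0, -2.5, -2.0, -1.5, -1.0, -0.5)]
best_lr = tune(grid)
```

In a real pipeline the surrogate would be a Gaussian process or tree-based model and each objective call would train the actual classifier; the sequential, model-guided proposals are what make the search cheaper than a full grid when evaluations are expensive.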
For people with disabilities, sign language is the most important means of communication, and scientists around the world are increasingly proposing intelligent hand gesture recognition systems. Such a system is aimed not only at those who wish to understand a sign language, but also at those who wish to speak through gesture recognition software. This paper introduces a new benchmark dataset for Kazakh fingerspelling suitable for training deep neural networks. The dataset contains more than 10122 gesture samples covering 42 letters. The alphabet has its own peculiarities, as some characters are shown in motion, which may affect sign recognition. The paper describes the research, analysis, comparison, and testing of convolutional neural networks: LeNet, AlexNet, ResNet, and EfficientNet (EfficientNetB7). The EfficientNet architecture is state-of-the-art (SOTA) and is newer than the other architectures under consideration. On this dataset, LeNet and EfficientNet outperform the competing algorithms; moreover, EfficientNet can achieve state-of-the-art performance on other hand gesture datasets. The architecture and operating principles of these algorithms reflect the effectiveness of their application to sign language recognition. The CNN models are evaluated using accuracy and a penalty matrix. During training, LeNet and EfficientNet showed better results: their accuracy and loss curves followed similar, close trends. The EfficientNet results were explained with the SHapley Additive exPlanations (SHAP) framework, which explored the model to detect complex relationships between features in the images; focusing on the SHAP tool may help to further improve the accuracy of the model.
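The evaluation step can be illustrated with a small stdlib sketch computing accuracy and a confusion-style penalty matrix (rows: true class, columns: predicted class) from predicted labels. The three-letter label set and the predictions are hypothetical; the paper's evaluation runs over the full 42-letter dataset, and its penalty matrix is assumed here to be of this confusion-matrix form.

```python
from collections import Counter

def accuracy(y_true, y_pred):
    # Fraction of samples whose predicted label matches the true label.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def confusion_matrix(y_true, y_pred, labels):
    # Entry [i][j] counts samples of true class labels[i]
    # that were predicted as class labels[j].
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

labels = ["А", "Б", "Ғ"]                      # hypothetical subset of letters
y_true = ["А", "А", "Б", "Б", "Ғ", "Ғ"]
y_pred = ["А", "А", "Б", "Ғ", "Ғ", "Ғ"]

acc = accuracy(y_true, y_pred)                # 5/6
cm = confusion_matrix(y_true, y_pred, labels)
# cm == [[2, 0, 0], [0, 1, 1], [0, 0, 2]]
```

Off-diagonal entries show which letters the model confuses, which is where motion-dependent characters would be expected to surface.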