Electromyography (EMG) signals have been used in the design of muscle-machine interfaces (MuMIs) for various applications, ranging from entertainment (EMG-controlled games) to human assistance and human augmentation (EMG-controlled prostheses and exoskeletons). For this purpose, classical machine learning methods such as Random Forest (RF) models have been used to decode EMG signals. However, these methods depend on several stages of signal pre-processing and the extraction of handcrafted features to obtain the desired output. In this work, we propose EMG-based frameworks for decoding object motions during the execution of dexterous, in-hand manipulation tasks, using raw EMG signals as input and two novel deep learning (DL) techniques called Temporal Multi-Channel Transformers and Vision Transformers. The results obtained are compared, in terms of accuracy and speed of motion decoding, with RF-based models and Convolutional Neural Networks as benchmarks. The models are trained for 11 subjects in both a motion-object-specific and a motion-object-generic way, using a 10-fold cross-validation procedure. This study shows that the performance of MuMIs can be improved by employing DL-based models on raw myoelectric activations instead of developing DL or classical machine learning models with handcrafted features.
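As a rough illustration of the raw-signal approach this abstract describes (not the authors' code), the sketch below shows a minimal 1-D convolutional network that maps raw multi-channel EMG windows directly to motion classes, with no handcrafted feature stage; the channel count, window length, and class count are assumed placeholders:

```python
# Illustrative sketch only: a small 1-D CNN consuming raw EMG windows.
# N_CHANNELS, WINDOW, and N_CLASSES are assumptions, not values from the paper.
import torch
import torch.nn as nn

N_CHANNELS = 8     # assumed number of EMG electrodes
WINDOW = 200       # assumed samples per decoding window
N_CLASSES = 6      # assumed number of object-motion classes

class RawEMGCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, N_CLASSES)

    def forward(self, x):  # x: (batch, channels, samples), raw EMG
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = RawEMGCNN()
logits = model(torch.randn(4, N_CHANNELS, WINDOW))  # 4 raw EMG windows
print(logits.shape)  # torch.Size([4, 6])
```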
Electromyography (EMG) signals are commonly used for the development of muscle-machine interfaces. EMG-based solutions provide intuitive and often hands-free control in a wide range of applications, from the decoding of human intention in classification tasks to the continuous decoding of human motion with regression models. In this work, we compare various machine learning and feature extraction methods for the creation of EMG-based control frameworks for dexterous robotic telemanipulation. Such frameworks require models that can decode dexterous, in-hand manipulation motions and perform hand gesture classification in real time. Three different machine learning methods and eight different time-domain features were evaluated and compared. The performance of the models was evaluated in terms of accuracy and the time required to predict a data sample. The model that presented the best trade-off between performance and prediction time was used to execute a telemanipulation task in real time with the New Dexterity Autonomous Robotic Assistance (ARoA) platform (a humanoid robot). Various experiments were conducted to experimentally validate the efficiency of the proposed methods. The robotic system is shown to successfully complete a series of tasks autonomously, as well as to efficiently execute tasks in a shared-control manner.
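To make the accuracy-versus-prediction-time trade-off concrete, here is a minimal sketch, with assumed data shapes and synthetic stand-in data, of extracting EMG time-domain features and timing a Random Forest prediction. The abstract does not list the eight features used, so the five computed below are merely typical choices from the EMG literature:

```python
# Illustrative sketch only: common time-domain features + RF prediction timing.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def time_domain_features(window):
    """window: (channels, samples) raw EMG; returns one feature vector."""
    feats = []
    for ch in window:
        feats += [
            np.mean(np.abs(ch)),                 # MAV: mean absolute value
            np.sqrt(np.mean(ch ** 2)),           # RMS: root mean square
            np.sum(np.abs(np.diff(ch))),         # WL: waveform length
            np.sum(np.diff(np.signbit(ch).astype(int)) != 0),  # ZC: zero crossings
            np.var(ch),                          # VAR: variance
        ]
    return np.array(feats)

# Synthetic stand-in data: 300 windows, 8 channels, 200 samples, 4 gestures.
rng = np.random.default_rng(0)
X = np.stack([time_domain_features(w) for w in rng.standard_normal((300, 8, 200))])
y = rng.integers(0, 4, size=300)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
start = time.perf_counter()
clf.predict(X[:1])  # latency of a single-sample prediction
print(f"prediction time: {(time.perf_counter() - start) * 1e3:.2f} ms")
```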
Conventional muscle-machine interfaces like electromyography (EMG) have significant drawbacks, such as crosstalk, a non-linear relationship between the signal and the corresponding motion, and increased signal-processing requirements. In this work, we introduce a new muscle-machine interfacing technique called lightmyography (LMG) that can be used to efficiently decode human hand gestures, motions, and forces from the detected contractions of the human muscles. LMG utilizes light propagation through elastic media and human tissue, measuring changes in light luminosity to detect muscle movement. Similar to forcemyography, LMG infers muscular contractions through tissue deformation and skin displacements. In this study, we examine how different characteristics of the light source and silicone medium affect the performance of LMG, and we compare LMG- and EMG-based gesture decoding using various machine learning techniques. To do so, we design an armband equipped with five LMG modules and use it to collect the required LMG data. Three different machine learning methods are employed: Random Forests, Convolutional Neural Networks, and Temporal Multi-Channel Vision Transformers. The system has also been efficiently used in decoding the forces exerted during power grasping. The results demonstrate that LMG outperforms EMG for most methods and subjects.
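A minimal sketch, under assumed sampling parameters and with synthetic data, of how readings from the five-module LMG armband described above could be prepared for gesture classification: each photodiode channel is normalized against a rest-pose baseline so the classifier operates on luminosity changes caused by tissue deformation rather than on absolute light levels:

```python
# Illustrative sketch only: baseline-normalized LMG windows fed to a
# Random Forest. Sampling rate, window size, and gesture count are assumptions;
# only the five-module armband count comes from the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_MODULES = 5  # five LMG modules on the armband (from the abstract)

def normalize_lmg(window, baseline):
    """window: (5, samples) raw luminosity; baseline: (5,) rest-pose means."""
    return (window - baseline[:, None]) / (baseline[:, None] + 1e-8)

rng = np.random.default_rng(1)
baseline = rng.uniform(0.4, 0.6, size=N_MODULES)           # per-module rest level
windows = rng.uniform(0.3, 0.7, size=(200, N_MODULES, 100))  # synthetic windows
X = np.stack([normalize_lmg(w, baseline).reshape(-1) for w in windows])
y = rng.integers(0, 5, size=200)                            # assumed 5 gestures

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print("train accuracy:", clf.score(X, y))
```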