In recent years, deep learning algorithms have become increasingly prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyography-based gesture recognition, deep learning algorithms are seldom employed, as they require an unreasonable amount of effort from a single person to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets, comprising 19 and 17 able-bodied participants respectively (the first is employed for pre-training), were recorded for this work using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and comprises 10 able-bodied participants. Three different deep learning networks, each employing a different modality as input (raw EMG, spectrograms, and Continuous Wavelet Transform (CWT)), are tested on the second and third datasets. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance of all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy, which reduces the degradation in accuracy normally experienced over time.
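The spectrogram modality described above can be illustrated with a minimal sketch: turning one short window of 8-channel, 200 Hz EMG into a per-channel time-frequency stack suitable as ConvNet input. This assumes SciPy; the window length, segment size, and overlap below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 200          # Myo Armband sampling rate (Hz)
N_CHANNELS = 8
WINDOW = 52       # ~260 ms analysis window (illustrative)

rng = np.random.default_rng(0)
# Stand-in for one raw 8-channel sEMG window; a real pipeline
# would slice this from the armband's data stream.
emg = rng.standard_normal((N_CHANNELS, WINDOW))

# One spectrogram per channel; stacking them forms an
# image-like input for a convolutional network.
specs = []
for ch in emg:
    f, t, Sxx = spectrogram(ch, fs=FS, nperseg=28, noverlap=20)
    specs.append(Sxx)
features = np.stack(specs)
print(features.shape)   # → (8, 15, 4): channels x freq bins x time steps
```

A CWT-based pipeline would follow the same pattern, replacing the spectrogram of each channel with its wavelet scalogram.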
Motion capture systems are recognized as the gold standard for joint angle calculation. However, studies using these systems are restricted to laboratory settings for technical reasons, which may lead to findings that are not representative of real-life contexts. Recently developed commercial and home-made inertial measurement sensors (M/IMU) are potentially good alternatives to the laboratory-based systems, and recent technological improvements warranted a synthesis of the current evidence. The aim of this systematic review was to determine the criterion validity and reliability of M/IMU for each body joint and for tasks of different levels of complexity. Five different databases were screened (PubMed, CINAHL, Embase, Ergonomics Abstracts, and Compendex). Two evaluators performed independent selection, quality assessment (consensus-based standards for the selection of health measurement instruments [COSMIN] and quality appraisal tools), and data extraction. Forty-two studies were included. Reported validity varied according to task complexity (higher validity for simple tasks) and the joint evaluated (better validity for lower limb joints). More studies on reliability are needed to draw stronger conclusions, as the number of studies addressing this psychometric property was limited. M/IMU should be considered a valid tool to assess whole-body range of motion, but further studies are needed to standardize technical procedures to obtain more accurate data.
In the realm of surface electromyography (sEMG) gesture recognition, deep learning algorithms are seldom employed. This is due in part to the large quantity of data required to train them. Consequently, it would be prohibitively time-consuming for a single user to generate a sufficient amount of data for training such algorithms. In this paper, two datasets comprising 18 and 17 able-bodied participants, respectively, are recorded using a low-cost, low-sampling-rate (200 Hz), 8-channel, consumer-grade, dry-electrode sEMG device named the Myo armband (Thalmic Labs). A convolutional neural network (CNN) is augmented using transfer learning techniques to leverage inter-user data from the first dataset and alleviate the data generation burden imposed on a single individual. The results show that the proposed classifier is robust and precise enough to guide a 6DoF robotic arm (in conjunction with orientation data) with the same speed and precision as with a joystick. Furthermore, the proposed CNN achieves an average accuracy of 97.81% on seven hand/wrist gestures for the 17 participants of the second dataset.
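The transfer learning idea above — reuse features learned on aggregated multi-user data, then adapt only a small part of the model to a new user's short calibration session — can be sketched in a few lines. This is a minimal numpy illustration, not the paper's ConvNet: a fixed random projection stands in for the frozen pre-trained layers, and only a softmax head is trained on the (synthetic) per-user data. All sizes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen, pre-trained feature extractor; in the paper this
# role is played by convolutional layers trained on the multi-user dataset.
W_frozen = rng.standard_normal((64, 16))

def extract_features(x):
    # Frozen layer + ReLU; these weights are never updated per user.
    return np.maximum(x @ W_frozen, 0.0)

# Synthetic per-user calibration data (stand-in for a few gesture repetitions)
n, n_classes = 200, 7
X = rng.standard_normal((n, 64))
y = rng.integers(0, n_classes, n)

# Only this small head is trained on the new user's data.
W_head = np.zeros((16, n_classes))
F = extract_features(X)
for _ in range(300):  # plain softmax-regression gradient steps
    logits = F @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n), y] -= 1.0          # softmax cross-entropy gradient
    W_head -= 0.1 * (F.T @ p) / n

acc = (np.argmax(F @ W_head, axis=1) == y).mean()
```

Because only the head is retrained, the new user needs far fewer examples than training the full network from scratch would require, which is the burden-reduction argument made in the abstract.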