Pattern recognition of electromyography (EMG) signals can potentially improve the performance of myoelectric control for upper limb prostheses with respect to current clinical approaches based on direct control. However, the choice of features for classification is challenging and impacts long-term performance. Here, we propose using raw EMG signals, recorded over multiple days, as direct inputs to deep networks with intrinsic feature extraction capabilities. Seven able-bodied subjects performed six active motions (plus rest), and EMG signals were recorded for 15 consecutive days with two sessions per day using the MYO armband (MYB, a wearable EMG sensor). Classification was performed by a convolutional neural network (CNN) with raw bipolar EMG samples as the inputs, and the performance was compared with linear discriminant analysis (LDA) and stacked sparse autoencoders with features (SSAE-f) and raw samples (SSAE-r) as inputs. CNN outperformed (lower classification error) both LDA and SSAE-r in the within-session, between-sessions-on-the-same-day, between-pairs-of-days, and leave-one-day-out analyses (p < 0.001). However, no significant difference was found between CNN and SSAE-f. These results demonstrate that CNN significantly improved performance and increased robustness over time compared with standard LDA with associated handcrafted features. This data-driven feature extraction approach may overcome the problem of feature calibration and selection in myoelectric control.
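The abstract above does not specify the network architecture, but the core idea of feeding raw bipolar EMG windows into a CNN can be sketched as a minimal forward pass. The following NumPy-only sketch assumes hypothetical dimensions (8 MYO channels, a 200-sample window, 7 classes for six motions plus rest, 16 filters of width 11) and randomly initialised weights standing in for a trained network; it is an illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 MYO channels, 200-sample raw EMG window,
# 7 classes (six motions plus rest), 16 convolutional filters of width 11.
N_CH, WIN, N_CLASS, N_FILT, K = 8, 200, 7, 16, 11

# Randomly initialised weights stand in for a trained network.
conv_w = rng.standard_normal((N_FILT, N_CH, K)) * 0.1
fc_w = rng.standard_normal((N_FILT, N_CLASS)) * 0.1

def cnn_forward(window):
    """One forward pass: valid 1-D convolution over time, ReLU,
    global average pooling, then a linear classifier."""
    out_len = WIN - K + 1
    feat = np.zeros((N_FILT, out_len))
    for f in range(N_FILT):
        for t in range(out_len):
            feat[f, t] = np.sum(conv_w[f] * window[:, t:t + K])
    feat = np.maximum(feat, 0.0)          # ReLU
    pooled = feat.mean(axis=1)            # global average pooling
    logits = pooled @ fc_w                # linear classification head
    return int(np.argmax(logits))

emg_window = rng.standard_normal((N_CH, WIN))  # stand-in raw bipolar EMG
pred = cnn_forward(emg_window)
print(pred)  # a class index in 0..6
```

The point of the sketch is that no handcrafted features (mean absolute value, zero crossings, etc.) are computed: the convolutional filters learn the feature extraction directly from the raw samples.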
Electromyography (EMG) is a measure of electrical activity generated by the contraction of muscles. Non-invasive surface EMG (sEMG)-based pattern recognition methods have shown potential for upper limb prosthesis control. However, they are still insufficient for natural control. Recent advancements in deep learning have shown tremendous progress in biosignal processing. Multiple architectures have been proposed yielding high accuracies (>95%) for offline analysis, yet the delay caused by system optimization remains a challenge for real-time application. From this arises the need for an optimized deep learning architecture based on fine-tuned hyper-parameters. Although the chance of achieving convergence is random, it is important to observe whether the performance gain is significant enough to justify the extra computation. In this study, a convolutional neural network (CNN) was implemented to decode hand gestures from sEMG data recorded from 18 subjects, to investigate the effect of hyper-parameters on each hand gesture. Results showed that a learning rate of either 0.0001 or 0.001 with 80-100 epochs significantly outperformed (p < 0.05) the other configurations. In addition, it was observed that, regardless of network configuration, some motions (close hand, flex hand, extend hand, and fine grip) performed better (83.7% ± 13.5%, 71.2% ± 20.2%, 82.6% ± 13.9% and 74.6% ± 15%, respectively) throughout the course of the study. Thus, a robust and stable myoelectric control scheme can be designed on the basis of the best-performing hand motions. With improved recognition and uniform gain in performance, the deep learning-based approach has the potential to be a more robust alternative to traditional machine learning algorithms.
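The hyper-parameter search described above (learning rate crossed with epoch count) can be sketched as a simple grid search. In this sketch, `train_and_evaluate` is a hypothetical stand-in: its toy scoring surrogate merely peaks near the values reported in the abstract, and in practice it would train the CNN on the sEMG data and return validation accuracy.

```python
import itertools
import math

def train_and_evaluate(lr, epochs):
    """Hypothetical stand-in for one training run. The toy surrogate
    below peaks near lr=0.001 and ~90 epochs, mirroring the trend
    reported above; replace with a real training loop."""
    lr_score = 1.0 - abs(math.log10(lr) + 3.0) / 4.0
    ep_score = 1.0 - abs(epochs - 90) / 200.0
    return round(0.5 * (lr_score + ep_score), 4)

learning_rates = [0.01, 0.001, 0.0001, 0.00001]
epoch_counts = [20, 40, 60, 80, 100]

# Evaluate every (learning rate, epochs) combination on the grid.
results = {(lr, ep): train_and_evaluate(lr, ep)
           for lr, ep in itertools.product(learning_rates, epoch_counts)}
best = max(results, key=results.get)
print(best)  # the best-scoring (learning_rate, epochs) pair
```

A per-gesture breakdown of the same loop (scoring each motion class separately) would reproduce the abstract's observation that some motions remain robust regardless of configuration.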
Advances in myoelectric interfaces have increased the use of wearable prosthetics, including robotic arms. Although promising results have been achieved with pattern recognition-based control schemes, control robustness requires improvement to increase user acceptance of prosthetic hands. The aim of this study was to quantify the performance of stacked sparse autoencoders (SSAE), an emerging deep learning technique, in improving myoelectric control, and to compare multiday surface EMG (sEMG) and intramuscular EMG (iEMG) recordings. Ten able-bodied and six amputee subjects, with average ages of 24.5 and 34.5 years, respectively, were evaluated using offline classification error as the performance metric. Surface and intramuscular EMG were concurrently recorded while each subject performed 11 hand motions. Performance of SSAE was compared with that of a linear discriminant analysis (LDA) classifier. Within-day analysis showed that SSAE (1.38 ± 1.38%) outperformed LDA (8.09 ± 4.53%) using both the sEMG and iEMG data from both able-bodied and amputee subjects (p < 0.001). In the between-day analysis, SSAE outperformed LDA (7.19 ± 9.55% vs. 22.25 ± 11.09%) using both sEMG and iEMG data from both able-bodied and amputee subjects. No significant difference in performance was observed for within-day and pairs-of-days analyses with eight-fold validation when using iEMG and sEMG with SSAE, whereas sEMG outperformed iEMG (p < 0.001) in between-day analysis with both two-fold and seven-fold validation schemes. The results obtained in this study imply that SSAE can significantly improve the performance of pattern recognition-based myoelectric control schemes and is able to extract deep information hidden in the EMG data.
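The defining ingredient of a sparse autoencoder is a sparsity penalty added to the reconstruction loss, typically a KL divergence between a small target activation and the mean hidden activation. The NumPy sketch below shows one such layer's forward pass and loss; the sizes (40-dimensional input, 20 hidden units), weights, and penalty weight are hypothetical illustrations, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def kl_sparsity(rho, rho_hat):
    """KL divergence between the target activation rho and the mean
    hidden activation rho_hat: the sparsity penalty of a sparse AE."""
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# Hypothetical sizes: 40-dim EMG input vector, 20 hidden units.
n_in, n_hid = 40, 20
W1 = rng.standard_normal((n_in, n_hid)) * 0.1   # encoder weights
W2 = rng.standard_normal((n_hid, n_in)) * 0.1   # decoder weights

X = rng.standard_normal((64, n_in))             # a batch of 64 windows
H = sigmoid(X @ W1)                             # hidden activations
X_rec = H @ W2                                  # reconstruction

recon_err = np.mean((X - X_rec) ** 2)           # reconstruction loss
penalty = kl_sparsity(0.05, H.mean(axis=0))     # sparsity penalty
loss = recon_err + 0.1 * penalty                # combined objective
print(loss)
```

"Stacking" such layers means training one layer at a time on the previous layer's hidden code, then fine-tuning the whole network with a classifier on top.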
Clinical treatment of skin lesions depends primarily on timely detection and delineation of lesion boundaries for accurate localization of cancerous regions. The prevalence of skin cancer is high, especially that of melanoma, which is aggressive in nature due to its high metastasis rate. Therefore, timely diagnosis is critical for its treatment before the onset of malignancy. To address this problem, medical imaging is used for the analysis and segmentation of lesion boundaries from dermoscopic images. Various methods have been used, ranging from visual inspection to textural analysis of the images. However, the accuracy of these methods is too low for proper clinical treatment because of the sensitivity involved in surgical procedures or drug application. This presents an opportunity to develop an automated model with good accuracy so that it may be used in a clinical setting. This paper proposes an automated method for segmenting lesion boundaries that combines two architectures, the U-Net and the ResNet, collectively called Res-Unet. Moreover, we also used image inpainting for hair removal, which improved the segmentation results significantly. We trained our model on the ISIC 2017 dataset and validated it on the ISIC 2017 test set as well as the PH2 dataset. Our proposed model attained a Jaccard Index of 0.772 on the ISIC 2017 test set and 0.854 on the PH2 dataset, which is comparable to currently available state-of-the-art techniques.
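The Jaccard Index reported above is the intersection-over-union of the predicted and ground-truth binary lesion masks. A minimal sketch of that metric, with toy 4x4 masks standing in for real segmentations (the convention of returning 1.0 for two empty masks is an assumption here, as conventions vary):

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection over union for binary lesion masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return np.logical_and(pred, target).sum() / union

# Toy 4x4 masks standing in for a predicted and a ground-truth mask.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
score = jaccard_index(pred, gt)
print(score)  # 3 overlapping pixels / 4 in the union = 0.75
```

Averaging this score over all test images yields the dataset-level figures quoted in the abstract (0.772 on ISIC 2017, 0.854 on PH2).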