One of the long-standing challenges in upper limb prosthetics is restoring the sensory feedback lost to amputation. Two approaches have previously been presented to provide various types of sensory information to users: multi-modality sensory feedback and arrays of single-modality stimulators. However, the feedback systems used in these approaches were too bulky to be embedded in prosthesis sockets. In this paper, we propose an electrocutaneous sensory feedback method capable of conveying two modalities simultaneously with a single electrode. The stimulation method, which we call mixed-modality stimulation, exploits the phenomenon in which the superposition of two electric pulse trains of different frequencies evokes two different modalities (i.e., pressure and tapping) at the same time. We conducted psychophysical experiments in which healthy subjects were required to recognize the intensity of pressure or the frequency of tapping from mixed-modality or two-channel stimulation. The results demonstrated that the subjects were able to discriminate the features of the two modalities delivered through one electrode during mixed-modality stimulation. The recognition accuracies (mean ± standard deviation) for the two feedback variables were 84.3 ± 7% for mixed-modality stimulation and 89.5 ± 6% for two-channel dual-modality stimulation, with no statistically significant difference between them. Therefore, mixed-modality stimulation is an attractive method for modulating two modalities independently with only one electrode, and it could be used to implement a compact sensory feedback system that provides two different types of sensory information from a prosthesis.
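The superposition described in the abstract can be sketched numerically. All parameters below (sampling rate, pulse width, the 5 Hz "tapping" and 200 Hz "pressure" frequencies and amplitudes) are hypothetical illustrations, not values reported by the paper:

```python
import numpy as np

def pulse_train(freq_hz, amp, duration_s=1.0, fs=10_000, width_s=200e-6):
    """Rectangular pulse train: one pulse of width `width_s` per period."""
    t = np.arange(int(duration_s * fs)) / fs
    phase = (t * freq_hz) % 1.0                     # position in period, 0..1
    return amp * (phase < width_s * freq_hz).astype(float)

# Hypothetical encoding: a low-frequency train conveys tapping frequency,
# while the amplitude of a high-frequency train conveys pressure intensity.
tapping = pulse_train(freq_hz=5, amp=1.0)           # 5 Hz tapping component
pressure = pulse_train(freq_hz=200, amp=0.5)        # 200 Hz pressure component
mixed = tapping + pressure                          # superposed on one channel
```

Because the two components occupy the same electrode, only the summed waveform `mixed` would be delivered; the two percepts are separated by the skin's differing response to the two frequencies.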
To improve the usability of a robotic prosthetic hand, providing degrees of freedom to every single finger is essential. Under the name of simultaneous proportional control (SPC), many studies have proposed methods to achieve this goal. In this paper, we propose a method to generate a regression model of the neuromuscular system, called the Constrained AutoEncoder Network (CAEN), that estimates finger forces from a surface electromyogram (sEMG). By modifying the autoencoder from deep learning, the model is trained in a semi-unsupervised manner in which only sEMG data and finger labels are used. During learning, the finger labels are applied at the central layer of the network, where the three finger forces are estimated, to prevent signals from other fingers from leaking into each finger node; the network is thus trained in a constrained manner. This constraint yields independence among the estimated finger forces, greatly improving the manipulability of multiple fingers. The proposed model was compared with four previously reported SPC models in two tests: an offline test and an online test. In the offline test, the CAEN performed well but was not the best. However, in the online test, which involved reaching target positions with three fingers simultaneously and proportionally, the proposed model achieved the best results for three of six online performance indices (completion rate, completion time, and throughput). Emphasizing independence among the estimated finger forces during training is the key point distinguishing the proposed method from previous studies, and the results showed that it was effective in online control.

INDEX TERMS Autoencoder, finger intention estimation, neural network, surface electromyogram (sEMG).
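The central idea — using finger labels at the central layer to suppress activity on nodes of inactive fingers — can be sketched as a constrained loss term. The function name, array shapes, and penalty weight below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def constrained_loss(x, x_hat, finger_nodes, finger_labels, lam=1.0):
    """Autoencoder reconstruction loss plus a label-driven constraint.

    x, x_hat      : (batch, n_channels) sEMG input and its reconstruction
    finger_nodes  : (batch, 3) central-layer finger-force estimates
    finger_labels : (batch, 3) binary mask, 1 where that finger is active
    """
    recon = np.mean((x - x_hat) ** 2)
    # Penalize activity on nodes of fingers that are inactive in the
    # sample, encouraging independence among the estimated forces.
    leak = np.mean((finger_nodes * (1.0 - finger_labels)) ** 2)
    return recon + lam * leak

# Toy batch: the second sample activates a node for an unlabeled finger,
# which the constraint term penalizes.
x = np.zeros((2, 4))
nodes = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
labels = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
loss = constrained_loss(x, x, nodes, labels)
```

Minimizing such a loss drives each central node toward responding only to its own finger, which is the independence property the abstract credits for the improved online control.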