The ability of robotic rehabilitation devices to support paralysed end-users is ultimately limited by the degree to which human-machine interaction is designed to translate user intention into robotic action effectively and efficiently. Specifically, we evaluate the novel possibility of using binocular eye-tracking technology to distinguish voluntary winks from involuntary blinks, establishing winks as a novel low-latency control signal for triggering robotic action. By wearing binocular eye-tracking glasses, users can directly observe their environment or the actuator and trigger movement actions without having to interact with a visual display unit or user interface. We compare our novel approach to two conventional approaches for controlling robotic devices based on electromyography (EMG) and speech-based human-computer interaction technology. We present an integrated software framework based on ROS that allows transparent integration of these multiple modalities with a robotic system. We use a soft-robotic SEM glove (Bioservo Technologies AB, Sweden) to evaluate how the three modalities support the performance and subjective experience of movement-assisted end-users. All three modalities are evaluated in streaming, closed-loop control for grasping physical objects. We find that wink control shows the lowest mean error rate with the lowest variability (0.23 ± 0.07, mean ± SEM), followed by speech control (0.35 ± 0.13) and EMG gesture control (using the Myo armband by Thalmic Labs), which has the highest mean and variability (0.46 ± 0.16). We conclude that our eye-tracking-based approach to controlling assistive technologies is a well-suited alternative to conventional approaches, especially when combined with 3D eye-tracking-based robotic end-point control.
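The modality comparison above reports per-modality error rates as mean ± SEM across participants. As a minimal sketch of how such summary statistics are computed (the per-participant values below are hypothetical, not the study's data):

```python
import math
import statistics

def mean_sem(errors):
    """Return the mean and standard error of the mean (SEM) of error rates."""
    m = statistics.mean(errors)
    # SEM = sample standard deviation divided by sqrt(number of participants)
    sem = statistics.stdev(errors) / math.sqrt(len(errors))
    return m, sem

# Hypothetical per-participant error rates for one control modality
wink_errors = [0.15, 0.20, 0.25, 0.30, 0.25]
m, sem = mean_sem(wink_errors)
print(f"wink control: {m:.2f} \u00b1 {sem:.2f} (mean \u00b1 SEM)")
```

SEM (rather than standard deviation) quantifies the uncertainty in the estimated mean, which is the appropriate statistic for comparing mean error rates between modalities.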
Motion intention detection is fundamental to the implementation of human-machine interfaces for assistive robots. In this paper, multiple machine learning techniques are explored for creating upper limb motion prediction models, which generally depend on three factors: the signals collected from the user (such as kinematic or physiological signals), the extracted features and the selected algorithm. We explore the use of different features extracted from various signals when used to train multiple algorithms for the prediction of elbow flexion angle trajectories. The accuracy of the prediction was evaluated based on the mean velocity and peak amplitude of the trajectory, which are sufficient to fully define it. Results show that prediction accuracy is low when using solely physiological signals; however, it improves substantially when kinematic signals are included. This suggests kinematic signals provide a reliable source of information for predicting elbow trajectories. Different models were trained using 10 algorithms. Regularization algorithms performed well in all conditions, whereas neural networks performed better when the most important features were selected. The extensive analysis provided in this study can be consulted to aid in the development of accurate upper limb motion intention detection models.
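The comparison described above, regularized linear models versus neural networks trained on combined kinematic and physiological features, follows a standard regression-benchmarking pattern. An illustrative sketch of that pattern (the study's actual features, data, and model configurations are not reproduced; everything below is synthetic):

```python
# Illustrative sketch only: compares a regularized linear model against a
# small neural network for predicting a trajectory parameter (e.g. peak
# elbow flexion amplitude) from combined feature sets.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 200
kinematic = rng.normal(size=(n, 4))      # synthetic stand-ins for kinematic features
physiological = rng.normal(size=(n, 4))  # synthetic stand-ins for physiological (EMG) features
X = np.hstack([kinematic, physiological])
# Synthetic target driven mostly by kinematic features, mirroring the finding
# that kinematic signals carry most of the predictive information.
y = kinematic @ np.array([1.0, 0.5, -0.8, 0.3]) + 0.1 * rng.normal(size=n)

results = {}
for name, model in [
    ("ridge (regularized)", Ridge(alpha=1.0)),
    ("mlp (neural net)", MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
]:
    # 5-fold cross-validated R^2 as the accuracy measure
    results[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {results[name]:.2f}")
```

Cross-validated scoring on held-out folds is what allows a fair comparison across algorithms of very different capacity.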
Stroke can be a devastating condition that impairs the upper limb and reduces mobility. Wearable robots can aid impaired users by supporting the performance of Activities of Daily Living (ADLs). In the past decade, soft devices have become popular due to their inherent malleable and low-weight properties, which make them generally safer and more ergonomic. In this study, we present an improved version of our previously developed gravity-compensating upper limb exosuit and introduce a novel hand exoskeleton. The latter uses 3D-printed structures attached to the back of the fingers that prevent undesired hyperextension of the joints. We explored the feasibility of using this integrated system in a sample of 10 chronic stroke patients who performed 10 ADLs. We observed a significant reduction of 30.3 ± 3.5% (mean ± standard error), 31.2 ± 3.2% and 14.0 ± 5.1% in the mean muscular activity of the Biceps Brachii (BB), Anterior Deltoid (AD) and Extensor Digitorum Communis muscles, respectively. Additionally, we observed a reduction of 14.0 ± 11.5%, 14.7 ± 6.9% and 12.8 ± 4.4% in the coactivation of the muscle pairs BB and Triceps Brachii (TB), BB and AD, and TB and Pectoralis Major (PM), respectively, typically associated with pathological muscular synergies, without significant degradation of healthy muscular coactivation. There was also a significant increase in elbow flexion angle (12.1 ± 1.5°). These results further cement the potential of using lightweight wearable devices to assist impaired users.
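The reductions reported above are relative changes in mean muscular activity between unassisted and assisted conditions. A minimal sketch of that computation (the EMG envelope values below are hypothetical, not the study's data):

```python
def percent_reduction(baseline, assisted):
    """Relative reduction (%) in mean activity from baseline to assisted condition."""
    mean_baseline = sum(baseline) / len(baseline)
    mean_assisted = sum(assisted) / len(assisted)
    return 100.0 * (mean_baseline - mean_assisted) / mean_baseline

# Hypothetical normalized EMG envelope means per trial for the Biceps Brachii,
# unassisted vs. exosuit-assisted
bb_baseline = [0.42, 0.40, 0.38]
bb_assisted = [0.29, 0.27, 0.28]
print(f"BB activity reduction: {percent_reduction(bb_baseline, bb_assisted):.1f}%")
```

The same relative measure applies to coactivation indices computed over muscle pairs, with the pairwise coactivation value substituted for the single-muscle mean.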