Background
Surface electromyography (EMG) signals are widely used in robotics and rehabilitation because they closely reflect users' motor intentions. However, few studies have addressed accurate, proportional control of the human hand using EMG signals: most have focused on discrete gesture classification, and some have encountered inherent problems such as electromechanical delay (EMD). Here, we present a new method for estimating simultaneous, multi-finger kinematics from multi-channel surface EMG signals.

Method
Surface EMG signals from the forearm and finger kinematic data were recorded from ten able-bodied subjects performing individual and simultaneous multi-finger flexion and extension movements in free space. Instead of traditional time-domain EMG features, an EMG-to-muscle-activation model that parameterizes EMD was used and shown to give better estimation performance. A fast feed-forward artificial neural network (ANN) and a nonparametric Gaussian process (GP) regressor, the latter rarely used in related work, were both evaluated for estimating complex finger kinematics.

Results
Estimation accuracies, in terms of mean correlation coefficient, were 0.85±0.07, 0.78±0.06, and 0.73±0.04 for the metacarpophalangeal (MCP), proximal interphalangeal (PIP), and distal interphalangeal (DIP) finger joint DOFs, respectively. The mean root-mean-square error in each individual DOF ranged from 5% to 15%. Estimation improved with the proposed muscle-activation inputs compared with other features, and GP regression gave better results when fewer training samples were available.

Conclusion
The proposed method provides a viable means of capturing the general trend of finger movements and an effective way of estimating finger joint kinematics using a muscle activation model that parameterizes EMD.
The results from this study demonstrate a potential EMG-based control strategy for simultaneous and continuous control of multi-DOF devices such as robotic hand/finger prostheses or exoskeletons.

Electronic supplementary material
The online version of this article (doi:10.1186/1743-0003-11-122) contains supplementary material, which is available to authorized users.
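The EMG-to-muscle-activation model named above is commonly realized as a second-order recursive filter with an explicit electromechanical delay, followed by an exponential shape nonlinearity (the Lloyd-Besier formulation). The sketch below illustrates that general scheme; the parameter values, the delay of 40 samples, and the function name are illustrative assumptions, not the fitted values used in the study.

```python
import numpy as np

def emg_to_activation(e, delay=40, c1=-0.5, c2=-0.5, A=-1.5):
    """Map rectified, normalized EMG e(t) in [0, 1] to muscle activation a(t).

    Second-order recursive filter with an explicit electromechanical delay
    (in samples), then an exponential shape nonlinearity with A in [-3, 0).
    Parameter values are illustrative, not fitted to any subject.
    """
    b1, b2 = c1 + c2, c1 * c2          # poles c1, c2 with |c| < 1 keep the filter stable
    alpha = 1.0 + b1 + b2              # unit-gain constraint: alpha - b1 - b2 = 1
    u = np.zeros(len(e))
    for t in range(len(e)):
        et = e[t - delay] if t >= delay else 0.0       # apply the EMD
        u[t] = alpha * et - b1 * u[t - 1] - b2 * u[t - 2]
    # nonlinear shaping maps neural activation u to muscle activation a in [0, 1]
    return (np.exp(A * np.clip(u, 0.0, 1.0)) - 1.0) / (np.exp(A) - 1.0)

# a step of fully-on EMG: activation rises only after the delay, then saturates at 1
a = emg_to_activation(np.ones(500))
```

The resulting activation signal, rather than raw time-domain EMG features, would then serve as the regression input.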
Abstract: This study addresses robotic clothing assistance, which remains an open problem in robotics even though dressing is one of the basic and important assistance activities in the daily life of elderly and disabled people. Clothing assistance is challenging because the robot must interact both with non-rigid clothes, generally represented in a high-dimensional space, and with an assisted person whose posture can vary during the assistance. The robot must therefore manage two difficulties: 1) handling non-rigid materials and 2) adapting its assisting movements to the assisted person's posture. To overcome these difficulties, we propose reinforcement learning in which the cloth's state is represented low-dimensionally in topology coordinates and the reward is defined in those low-dimensional coordinates. With our experimental system for T-shirt clothing assistance, comprising an anthropomorphic dual-arm robot and a soft mannequin, we demonstrate that the robot quickly learns a suitable arm motion for putting the mannequin's head into a T-shirt.
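The learning scheme described above, searching over motion parameters and scoring each rollout by a reward defined in low-dimensional cloth coordinates, can be illustrated with a minimal episodic policy-search loop. This toy sketch uses the cross-entropy method on a stand-in quadratic reward; the 2-D parameter vector, the target values, and the reward function are all hypothetical placeholders, not the authors' topology-coordinate reward or robot controller.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.3, -0.7])   # hypothetical "collar over head" parameter setting

def reward(theta):
    # stand-in for a reward computed in low-dimensional cloth coordinates:
    # highest when the sampled motion parameters reach the target configuration
    return -np.sum((theta - target) ** 2)

mean, std = np.zeros(2), np.ones(2)
for _ in range(30):
    samples = rng.normal(mean, std, size=(50, 2))       # sample candidate motions
    scores = np.array([reward(s) for s in samples])     # one rollout per sample
    elite = samples[np.argsort(scores)[-10:]]           # keep the 10 best rollouts
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
```

After a few dozen iterations the search distribution concentrates on the high-reward motion parameters, which is the behavior the abstract reports at the level of real arm trajectories.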
Robotic solutions to clothing assistance can significantly improve quality of life for the elderly and disabled. Real-time estimation of the human-cloth relationship is crucial for efficient learning of motor skills for robotic clothing assistance. The major challenge involved is cloth state estimation, owing to inherent non-rigidity and occlusion. In this study, we present a novel framework for real-time estimation of cloth state using a low-cost depth sensor, making it suitable for feasible social implementation. The framework relies on the hypothesis that clothing articles are constrained to a low-dimensional latent manifold during clothing tasks. We propose the use of Manifold Relevance Determination (MRD) to learn an offline cloth model that can be used to perform informed cloth state estimation in real time. The cloth model is trained using observations from a motion capture system and a depth sensor. MRD provides a principled probabilistic framework for inferring the accurate motion-capture state when only the noisy depth-sensor feature state is available in real time. The experimental results demonstrate that our framework is capable of learning consistent task-specific latent features using few data samples and can generalize to unseen environmental settings. We further present several factors that affect the predictive performance of the learned cloth state model.
Patients suffering from loss of hand function caused by stroke and other spinal cord injuries have driven a surge in the development of wearable assistive devices in recent years. In this paper, we present a system comprising a low-profile, optimally designed finger exoskeleton continuously controlled by a user's surface electromyographic (sEMG) signals. The mechanical design is based on an optimal four-bar linkage that can model the finger's irregular trajectory arising from the finger's varying segment lengths and changing instantaneous center of rotation. The desired joint angle positions are given by the predictive output of an artificial neural network with an EMG-to-Muscle Activation model that parameterizes electromechanical delay (EMD). After confirming good prediction accuracy for multiple finger joint angles, we evaluated an index-finger exoskeleton by recording a subject's EMG signals from the left forearm and using them to actuate a finger on the right hand with the exoskeleton. Our results show that our sEMG-based control strategy worked well in controlling the exoskeleton, reaching the intended positions of the device, and that the subject felt appropriate motion support from the device.
Real-time estimation of the human-cloth relationship is crucial for efficient learning of motor skills in robotic clothing assistance. However, cloth state estimation using a depth sensor is a challenging problem with inherent ambiguity. To address this problem, we propose the offline learning of a cloth dynamics model that incorporates reliable motion capture data, and the application of this model to the online tracking of the human-cloth relationship using a depth sensor. In this study, we evaluate the performance of a shared Gaussian Process Latent Variable Model (GP-LVM) in learning the dynamics of clothing articles. The experimental results demonstrate the effectiveness of the shared GP-LVM in capturing cloth dynamics using few data samples and its ability to generalize to unseen settings. We further identify three key factors that affect the predictive performance of the trained dynamics model.
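The core inference task in the two cloth-tracking abstracts above is learning, offline, a probabilistic mapping so that the reliable motion-capture state can be recovered online from noisy depth-sensor features alone. The sketch below is not the shared GP-LVM or MRD model itself; it is a plain Gaussian-process regression with an RBF kernel on synthetic 1-D data, shown only to illustrate the underlying idea of a GP posterior mean recovering a clean signal from noisy observations. All data and hyperparameters are invented for the example.

```python
import numpy as np

def rbf(X1, X2, ls=0.5, var=1.0):
    # squared-exponential kernel between two sets of input points
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_predict(Xtr, Ytr, Xte, noise=1e-2):
    # GP posterior mean: K*(K + noise*I)^-1 y
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return rbf(Xte, Xtr) @ np.linalg.solve(K, Ytr)

rng = np.random.default_rng(1)
Xtr = rng.uniform(0.0, 2 * np.pi, (40, 1))              # offline training inputs
Ytr = np.sin(Xtr) + 0.05 * rng.normal(size=Xtr.shape)   # noisy observed features
Xte = np.linspace(0.0, 2 * np.pi, 100)[:, None]         # online query points
mu = gp_predict(Xtr, Ytr, Xte)                          # denoised state estimate
```

In the papers' setting, the latent-variable model plays the role of this regressor, with depth-sensor features as the noisy observations and motion-capture coordinates as the clean target state.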