Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
A convenient and effective binocular vision system is set up, with which gesture information can be accurately extracted from a complex environment. The template calibration method is used to calibrate the binocular camera, and the camera parameters are obtained accurately. In the stereo-matching phase, the block-matching (BM) algorithm is used to quickly and accurately match the left and right camera images and obtain the disparity of the measured gesture; combined with the triangulation principle, this yields a dense depth map. Finally, the depth information is remapped onto the original color image to realize three-dimensional reconstruction and point-cloud generation. The point-cloud information shows that the binocular vision system can effectively segment the gesture from a complex background.
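The pipeline above (block matching to obtain disparity, then triangulation to recover depth) can be sketched in pure NumPy. This is a minimal illustration of the idea, not the BM implementation used in the paper (production systems typically use an optimized implementation such as OpenCV's StereoBM); the function names and parameters here are illustrative assumptions.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Naive block matching: for each left-image pixel, find the horizontal
    shift d into the right image that minimizes the sum of absolute
    differences (SAD) over a small window."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1].astype(np.float32)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Triangulation for a rectified stereo pair: Z = f * B / d,
    valid only where the disparity is positive."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, 0.0)
```

Shifting a textured image horizontally by a known number of pixels and feeding both copies to `block_match_disparity` recovers that shift in the interior of the frame, and `depth_from_disparity` then converts it to metric depth given the focal length and baseline.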
Feature extraction is one of the most important steps in the control of a multifunctional prosthesis based on surface electromyography (sEMG) pattern recognition. In this paper, a new sEMG feature extraction method based on muscle active regions is proposed. An experiment classifying four hand motions with different features is designed to show that the new feature has better classification performance. The experimental results show that the new feature, active muscle regions (AMR), outperforms the traditional features mean absolute value (MAV), waveform length (WL), zero crossings (ZC) and slope sign changes (SSC): the average classification errors of AMR, MAV, WL, ZC and SSC are 13%, 19%, 26%, 24% and 22%, respectively. The new EMG feature is based on the mapping relationship between hand movements and forearm active muscle regions, a relationship that has been confirmed in medicine. The active-muscle-region data are obtained from the raw EMG signal by the new feature extraction algorithm, and the results represent hand motions well. Moreover, the new feature vector is much smaller than the other features, which reduces the computational cost. This shows that AMR can improve the sEMG pattern recognition accuracy rate.
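The four traditional time-domain baselines named above have standard definitions, which can be written compactly; the sketch below uses those textbook formulas (the AMR feature itself is the paper's contribution and is not reproduced here, and the threshold parameters are illustrative assumptions).

```python
import numpy as np

def mav(x):
    """Mean absolute value of the sEMG window."""
    return float(np.mean(np.abs(x)))

def wl(x):
    """Waveform length: cumulative absolute change over the window."""
    return float(np.sum(np.abs(np.diff(x))))

def zc(x, thresh=0.0):
    """Zero crossings: sign changes whose amplitude step exceeds a
    noise threshold."""
    return int(np.sum((x[:-1] * x[1:] < 0)
                      & (np.abs(x[:-1] - x[1:]) >= thresh)))

def ssc(x, thresh=0.0):
    """Slope sign changes: samples that are a local extremum relative to
    both neighbours, gated by the same kind of noise threshold."""
    d1 = x[1:-1] - x[:-2]
    d2 = x[1:-1] - x[2:]
    return int(np.sum((d1 * d2 > 0)
                      & ((np.abs(d1) >= thresh) | (np.abs(d2) >= thresh))))
```

In a typical pipeline, each of these is computed per channel over a sliding analysis window and the results are concatenated into the feature vector fed to the classifier.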
In order to study and analyse human hand motions, which contain multimodal information, a generalised framework integrating multiple sensors is proposed; it consists of modules for sensor integration, signal preprocessing, correlation study of sensory information and motion identification. Three types of sensors are integrated to simultaneously capture the finger angle trajectories, the hand contact forces and the forearm electromyography (EMG) signals. To facilitate the rapid acquisition of human hand tasks, methods to automatically synchronise and segment manipulation primitives are developed in the signal preprocessing module. Correlations of the sensory information are studied using Empirical Copula, demonstrating significant relationships between muscle signals and finger trajectories and between muscle signals and contact forces. In addition, recognising different hand grasps and manipulations from the EMG signals is investigated using Fuzzy Gaussian Mixture Models (FGMMs); comparative experiments show that FGMMs outperform Gaussian Mixture Models (GMMs) and Support Vector Machines (SVMs) with a higher recognition rate. The proposed framework, integrating state-of-the-art sensor technology with the developed algorithms, provides researchers with a versatile and adaptable platform for human hand motion analysis and has potential applications especially in robotic hand or prosthetic hand control and Human-Computer Interaction (HCI).
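The recognition scheme compared here (GMMs, FGMMs, SVMs) rests on fitting a generative model per motion class and labelling a new feature vector by maximum likelihood. A minimal sketch of that decision rule, assuming a single Gaussian per class in place of the full (fuzzy) mixtures used in the paper, with illustrative function names:

```python
import numpy as np

def fit_gaussians(X, y):
    """Fit one Gaussian (mean, covariance) per class label — a
    single-component stand-in for the per-class mixture models."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # Small diagonal term regularizes the covariance estimate.
        cov = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])
        models[c] = (mu, cov)
    return models

def log_likelihood(x, mu, cov):
    """Log-density of x under a multivariate Gaussian N(mu, cov)."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d)
                   + logdet + len(x) * np.log(2.0 * np.pi))

def classify(models, x):
    """Assign x to the class whose model gives it the highest likelihood."""
    return max(models, key=lambda c: log_likelihood(x, *models[c]))
```

Replacing `fit_gaussians` with a multi-component (or fuzzy) mixture fit changes only the density estimate; the maximum-likelihood decision rule in `classify` stays the same.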