This research focuses on a minimal process for classifying three human upper-arm movements (elbow extension, shoulder extension, and combined shoulder and elbow extension) from three electromyography (EMG) signals, in order to control a 2-degree-of-freedom (DoF) robotic arm. The proposed process consists of four parts: time division of the data, the Teager–Kaiser energy operator (TKEO), conventional EMG feature extraction (mean absolute value (MAV), zero crossings (ZC), slope-sign changes (SSC), and waveform length (WL)), and eight common machine learning models: medium decision tree, fine decision tree, weighted k-nearest neighbors (KNN), fine KNN, cubic support vector machine (SVM), fine Gaussian SVM, bagged-trees ensemble, and subspace-KNN ensemble. We then compare and investigate 48 classification models (47 proposed and 1 conventional) on data from five healthy subjects. The results showed that all classification models achieved accuracies between 74% and 98%, with processing times below 40 ms, an acceptable controller delay for robotic arm control. Moreover, we confirmed that the model with no time division, with TKEO, and with the subspace-KNN ensemble performed best, with an accuracy of 96.67%, a recall of 99.66%, and a precision of 96.99%. In short, the combination of the proposed TKEO and the subspace-KNN ensemble plays an important role in achieving accurate EMG classification.
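The abstract names the operator and features but gives no implementation. As a minimal sketch (the function names, the window handling, and the noise threshold eps are illustrative assumptions, not from the paper), the TKEO and the four conventional features could be computed per analysis window as:

import numpy as np

def tkeo(x):
    # Teager–Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def emg_features(x, eps=1e-6):
    # Conventional time-domain EMG features over one analysis window.
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))  # mean absolute value (MAV)
    # zero crossings (ZC): sign changes whose amplitude step exceeds eps
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) >= eps))
    # slope-sign changes (SSC): slope reversals above the noise threshold
    ssc = np.sum((dx[:-1] * dx[1:] < 0)
                 & ((np.abs(dx[:-1]) >= eps) | (np.abs(dx[1:]) >= eps)))
    wl = np.sum(np.abs(dx))  # waveform length (WL)
    return np.array([mav, zc, ssc, wl])

Under these assumptions, each channel yields a four-element feature vector (twelve features for three EMG channels), which is the kind of input the listed classifiers would consume.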
This article addresses human-robot cooperation tasks, focusing on robot operation using bio-signals. In particular, we propose a control scheme for a robot arm based on electromyography (EMG) signals that enables cooperative tasks between humans and robots, including teleoperation. We present a basic framework for achieving the task and for analyzing the EMG signals of the upper-limb muscles in order to map the hand motion. The objective of this work is to investigate the use of a wearable EMG device to control a robot arm in real time. Three EMG sensors are attached to the brachioradialis, biceps brachii, and anterior deltoid as target muscles. Three motions were performed by moving the arm about the elbow joint, the shoulder joint, and a combination of the two joints, giving two degrees of freedom. Five subjects participated in the experiments. The results indicated that the overall accuracy of the system varied from 50% to 100% across the three motions for all subjects. This study further shows that upper-limb motion discrimination can be used to control a robotic manipulator arm simply and at low computational cost.
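The abstract does not specify the control interface between the classifier and the arm. A hypothetical sketch of how a discriminated motion class might be mapped to the two joints (the labels, step size, and mapping are assumptions, not the paper's method):

# Hypothetical mapping from a discriminated motion class to incremental
# 2-DoF joint commands; the paper does not specify its control interface.
MOTION_TO_JOINTS = {
    "elbow": (1.0, 0.0),     # elbow motion drives the elbow joint only
    "shoulder": (0.0, 1.0),  # shoulder motion drives the shoulder joint only
    "combined": (1.0, 1.0),  # combined motion drives both joints
}

def command_from_motion(label, step_rad=0.05):
    # Translate a classifier label into incremental joint angles (radians).
    elbow, shoulder = MOTION_TO_JOINTS[label]
    return (elbow * step_rad, shoulder * step_rad)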
The purpose of this paper is to quickly and stably grasp objects with a 3D robot arm controlled by electrooculography (EOG) signals. An EOG signal is a biological signal generated when the eyeballs move, and it can be used for gaze estimation. In conventional research, gaze estimation has been used to control a 3D robot arm for welfare purposes. However, the EOG signal loses some of the eye-movement information as it travels through the skin, which introduces errors into EOG gaze estimation. It is therefore difficult for EOG gaze estimation to point at an object accurately, and the object may not be grasped properly. Developing a method to compensate for the lost information and increase spatial accuracy is thus important. This paper aims to realize highly accurate object grasping with a robot arm by combining EOG gaze estimation with object recognition based on camera image processing. The system consists of a robot arm, top and side cameras, a display showing the camera images, and an EOG measurement analyzer. The user manipulates the robot arm through the camera images, which can be switched, and EOG gaze estimation specifies the object. The user first gazes at the center of the screen and then moves their eyes to the object to be grasped. The proposed system then recognizes the object in the camera image via image processing and grasps it using the object centroid. An object is selected when its centroid is the closest to the estimated gaze position and lies within a certain distance (threshold), enabling highly accurate object grasping. Because the apparent size of an object on the screen depends on the camera installation and the display state, setting the distance threshold from the object centroid is crucial. A first experiment was conducted to clarify the distance error of the EOG gaze estimation in the proposed system configuration; the distance error ranged from 1.8 to 3.0 cm. A second experiment evaluated grasping performance with two thresholds derived from the first experiment: the middle distance-error value of 2 cm and the maximum distance-error value of 3 cm. Grasping with the 3 cm threshold was 27% faster than with the 2 cm threshold due to more stable object selection.
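The selection rule described above (nearest object centroid within a distance threshold) can be sketched as follows; the function and parameter names are illustrative, not from the paper:

import numpy as np

def select_object(gaze_xy, centroids, threshold_cm):
    # Pick the object whose centroid is nearest the estimated gaze
    # position, but only if it lies within the distance threshold
    # (2 cm or 3 cm in the paper's experiments).
    if len(centroids) == 0:
        return None
    cents = np.asarray(centroids, dtype=float)
    dists = np.linalg.norm(cents - np.asarray(gaze_xy, dtype=float), axis=1)
    i = int(np.argmin(dists))
    return i if dists[i] <= threshold_cm else None  # None: nothing selected

With the larger threshold, a gaze estimate carrying 1.8–3.0 cm of error falls inside the acceptance radius more often, which is consistent with the reported 27% faster grasping at 3 cm.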