This paper provides a comprehensive survey of the current state of bio-sensing technologies for hand motion capture and their application to interfacing hand prostheses. These sensing techniques include electromyography (EMG), sonomyography (SMG), mechanomyography (MMG), electroneurography (ENG), electroencephalography (EEG), electrocorticography (ECoG), intracortical neural interfaces, near-infrared spectroscopy (NIRS), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI). Relevant approaches that interpret bio-signals for prosthetic hand manipulation are reviewed as well. Multi-modal sensory fusion offers a new strategy in this area, and the latest multi-modal sensing techniques are surveyed. The paper also outlines new challenges and directions: exploration of robust sensing technology, multi-modal sensory fusion, online signal processing and learning algorithms, and bio-feedback.
It is evident that surface electromyography (sEMG) based human-machine interfaces (HMIs) have inherent difficulty in predicting dexterous musculoskeletal movements such as finger motions. This paper investigates a plausible alternative to sEMG, an ultrasound-driven HMI, for dexterous motion recognition, owing to its ability to detect morphological changes of deep muscles and tendons. A lightweight multi-channel A-mode ultrasound device is adopted to evaluate finger motion recognition performance; an experiment covering both widely accepted offline and online algorithms is conducted with eight able-bodied subjects. The experimental results show an offline recognition accuracy of up to 98.83% ± 0.79%, a real-time motion completion rate of 95.4% ± 8.7%, and an online motion selection time of 0.243 ± 0.127 s. These outcomes confirm the feasibility of A-mode ultrasound based wearable HMIs and their promising applications in prosthetic devices, virtual reality, and remote manipulation.
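As a rough illustration of how such an offline recognition stage could look, the Python sketch below reduces multi-channel A-mode echo frames to simple per-segment energy features and classifies them with linear discriminant analysis. The feature scheme, the classifier choice, and all data shapes are assumptions made for illustration, not the authors' exact pipeline.

```python
# Hypothetical offline pipeline for A-mode ultrasound finger-motion
# recognition: per-channel echo frames are reduced to segment-wise
# mean-energy features, then classified with LDA. Illustrative only;
# the paper's exact features and classifier are not specified here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def extract_features(frames, n_segments=8):
    """frames: (n_samples, n_channels, n_points) raw A-mode echoes.
    Returns (n_samples, n_channels * n_segments) mean-energy features."""
    n, c, p = frames.shape
    segs = frames[:, :, : p - p % n_segments].reshape(n, c, n_segments, -1)
    return np.abs(segs).mean(axis=-1).reshape(n, -1)

rng = np.random.default_rng(0)
frames = rng.normal(size=(400, 4, 1000))   # 4 transducer channels (stand-in data)
labels = rng.integers(0, 8, size=400)      # 8 finger-motion classes (stand-in labels)

X = extract_features(frames)
clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, labels, cv=5).mean())  # offline accuracy estimate
```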
With the continuous development of sensor technology, the acquisition cost of RGB-D images keeps falling, and gesture recognition based on depth and Red-Green-Blue (RGB) images has gradually become a research direction in the field of pattern recognition. However, most current processing methods for RGB-D gesture images are relatively simple: they ignore the relationship and mutual influence between the two modalities and fail to exploit the correlations between them. To address these problems, this paper improves RGB-D information processing by considering both the independent and the correlated features of the multi-modal data, constructing a weight-adaptive algorithm to fuse the different features. Simulation experiments show that the proposed method outperforms traditional RGB-D gesture image processing methods and achieves a higher gesture recognition rate. Compared with more advanced current gesture recognition methods, the proposed method also achieves higher recognition accuracy, which verifies its feasibility and robustness.
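A minimal sketch of the weight-adaptive fusion idea, assuming a PyTorch setup: a small gating network predicts per-modality weights from the concatenated features, so the fused representation can emphasize RGB or depth on a per-sample basis. The module name, dimensions, and gating design are illustrative, not the paper's exact architecture.

```python
# Hedged sketch of weight-adaptive RGB-D feature fusion: a gating
# network outputs two softmax weights that scale the projected RGB
# and depth features before summation. Dimensions are placeholders.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, dim_rgb=512, dim_depth=512, dim_out=256):
        super().__init__()
        self.proj_rgb = nn.Linear(dim_rgb, dim_out)
        self.proj_depth = nn.Linear(dim_depth, dim_out)
        # gate outputs two per-sample weights that sum to 1
        self.gate = nn.Sequential(
            nn.Linear(dim_rgb + dim_depth, 2), nn.Softmax(dim=-1)
        )

    def forward(self, f_rgb, f_depth):
        w = self.gate(torch.cat([f_rgb, f_depth], dim=-1))    # (batch, 2)
        fused = (w[:, 0:1] * self.proj_rgb(f_rgb)
                 + w[:, 1:2] * self.proj_depth(f_depth))      # (batch, dim_out)
        return fused

fusion = AdaptiveFusion()
out = fusion(torch.randn(4, 512), torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 256])
```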
Robot grasping technology is a hot spot in robotics research. In relatively fixed industrial scenarios, robots can perform grasping tasks efficiently and over long durations. However, in an unstructured environment the items are diverse, their placement poses are random, and multiple objects are stacked and occlude each other, which makes it difficult for the robot to recognize the grasp target and complicates the grasping method. Therefore, we propose an accurate, real-time robot grasp detection method based on convolutional neural networks. A cascaded two-stage convolutional neural network model with coarse-to-fine position and attitude estimation is established: an R-FCN model extracts and screens candidate grasp-position regions and performs rough angle estimation, and, to address the insufficient pose-detection accuracy of previous methods, an Angle-Net model is proposed to finely estimate the grasp angle. Tests on the Cornell dataset and online robot experiments show that the method can quickly compute the optimal grasp point and pose for irregular objects with arbitrary poses and different shapes. Both detection accuracy and real-time performance are improved over previous methods.
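The coarse-to-fine angle estimation can be sketched as follows, with placeholder predictors standing in for the R-FCN and Angle-Net stages; the bin count and the dummy stage logic are assumptions for illustration only, not the paper's trained models.

```python
# Illustrative coarse-to-fine grasp-angle estimation in the spirit of
# the cascaded design: a coarse stage picks one of K angle bins, and a
# fine stage regresses a residual within that bin. Both stages below
# are placeholders for the actual networks.
import numpy as np

K = 18                       # coarse bins over 180 degrees (10 deg each)
BIN_WIDTH = 180.0 / K

def coarse_stage(patch):
    """Stand-in for the R-FCN stage: returns a coarse bin index."""
    return int(patch.mean() * K) % K                 # dummy logic

def fine_stage(patch, bin_idx):
    """Stand-in for Angle-Net: residual in [-BIN_WIDTH/2, +BIN_WIDTH/2)."""
    return (patch.std() % 1.0 - 0.5) * BIN_WIDTH     # dummy logic

def estimate_angle(patch):
    b = coarse_stage(patch)                          # coarse: pick a bin
    theta = (b + 0.5) * BIN_WIDTH + fine_stage(patch, b)  # fine: refine in-bin
    return theta % 180.0

patch = np.random.rand(32, 32)                       # stand-in image patch
print(f"grasp angle: {estimate_angle(patch):.1f} deg")
```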