2019
DOI: 10.1109/access.2019.2911968
Development of Distributed Control System for Vision-Based Myoelectric Prosthetic Hand

Abstract: A vision-based myoelectric prosthetic hand uses a camera integrated into its body for object detection and environment understanding; the results provide the information needed for grasp planning. Semi-automatic prosthesis control is expected to be realizable with this method. However, such a control method usually suffers from heavy computation, because real-time image processing is required to keep up with the user's arm movements. This paper presents a distributed control system that…

Cited by 18 publications (17 citation statements)
References 17 publications
“…Our research group has also been developing a vision-based prosthetic hand based on deep learning technology [35] [36] [37] [38]. In [39], we designed a prosthetic hand control method that can determine the grasping target and motion according to the spatial and temporal relationship between the prosthetic hand and the objects, such as distance, position, and gazing time.…”
Section: Related Work
confidence: 99%
“…This input-output relationship is extremely complex and nonlinear, but humans naturally obtain the input-output map through learning with a single central neural network. [Figure 1: Vision-based prosthetic hand [38]] Referring to this excellent human mechanism, we attempt to implement a single deep learning network in the system and model the nonlinear mapping between images, sEMG signals, and prosthetic hand motions in an end-to-end training manner.…”
Section: Introduction
confidence: 99%
“…5. It uses the same architecture as that used in [17]. The network first extracts features from an entire image using convolutional layers (the backbone); then, the features are used for coordinate regression and class classification [30].…”
Section: A. Target Object Selection
confidence: 99%
“…The control system follows a simple rule: the object closest to the center of the image is considered the target object. This rule is also used in [4]. Another concern is that the regression of the object location is sometimes not quite accurate, so parts of the object may not be included in the cropped object image.…”
Section: A. Object Recognition
confidence: 99%
“…The eyes then coordinate the hand to properly grasp the object. Inspired by the way humans grasp objects, many studies integrate cameras into the prosthetic hand control system [1], [2], [3], [4], [5], [6], [7], [8]. Such systems accept images as input and extract the necessary information (e.g.…”
Section: Introduction
confidence: 99%