2022
DOI: 10.3389/frai.2021.744476
Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping

Abstract: The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches have been developed and investigated in recent decades, limited robustness in real-life conditions has often prevented their application in clinical settings and commercial products. In this paper, we investigate a multimodal approach that exploits eye-hand coordination to improve the control of myoelectric hand prostheses. The an…

Cited by 17 publications (16 citation statements)
References 48 publications
Citation types: 1 supporting, 15 mentioning, 0 contrasting
“…Various possible factors responsible for the poor long-term robustness of HGR have been studied and could be summarised as muscle fatigue, skin conductivity, limb position, electrode displacement, and signal variation over days [10], [11], [12], [13], [14]. To address these issues, we believe that rejecting uncertain predictions [15], [16], [17] is the most straightforward approach compared to others such as multimodal approaches [6], [18], [19], adaptive learning [20], [21], [22], [23], [24], and alternative training protocols [24], [25], [26], [27].…”
Section: Introduction (Mentioning)
confidence: 99%
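As a minimal sketch of the rejection idea in the statement above, the following snippet accepts a hand-gesture prediction only when the classifier's confidence clears a threshold. The function name and the 0.8 threshold are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def classify_with_rejection(probs: np.ndarray, threshold: float = 0.8):
    """Return the predicted gesture index, or None to reject.

    probs: softmax output of a hand-gesture classifier for one trial.
    threshold: minimum confidence required to act on the prediction
               (the 0.8 value is illustrative, not from the cited works).
    """
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None  # uncertain prediction: leave the prosthesis state unchanged
    return best

# Example: a confident 5-class prediction is accepted, an ambiguous one rejected.
print(classify_with_rejection(np.array([0.05, 0.85, 0.04, 0.03, 0.03])))  # -> 1
print(classify_with_rejection(np.array([0.30, 0.35, 0.15, 0.10, 0.10])))  # -> None
```

Rejecting rather than acting on low-confidence predictions trades responsiveness for safety, which is why the citing authors call it the most straightforward route to long-term robustness.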
“…[7] Using the dataset for offline testing, they proposed a multimodal method (first-person image and EMG signals) to detect the grasp types.[8] This method used the gaze point to decide the target object according to the pixel distance between the object mask and the gaze point. Similarly, some studies used eye movement signals to determine which object is the target after detecting multiple objects in the image.…”
Section: Introduction, 1 | Related Background (Mentioning)
confidence: 99%
“…create a multimodal dataset for intelligent prosthetics, including gaze, visual, myoelectric, and inertial data of grasps.[7] Using the dataset for offline testing, they proposed a multimodal method (first-person image and EMG signals) to detect the grasp types.[8] This method used the gaze point to decide the target object according to the pixel distance between the object mask and the gaze point.…”
Section: Introduction (Mentioning)
confidence: 99%
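The two statements above describe selecting the target object by the pixel distance between the gaze point and each detected object mask. A sketch of that selection rule follows; the boolean-mask representation and the function name are assumptions for illustration, not the cited method's actual interface:

```python
import numpy as np

def select_target_object(gaze_xy, masks):
    """Pick the object whose mask lies closest to the gaze point.

    gaze_xy: (x, y) gaze coordinates in image pixels.
    masks:   list of boolean arrays of shape (H, W), one per detected object.
    Returns the index of the mask with the smallest Euclidean pixel
    distance from the gaze point to any of its pixels, or None.
    """
    gx, gy = gaze_xy
    best_idx, best_dist = None, np.inf
    for i, mask in enumerate(masks):
        ys, xs = np.nonzero(mask)  # pixel coordinates belonging to this object
        if xs.size == 0:
            continue
        d = np.min(np.hypot(xs - gx, ys - gy))  # distance to the nearest mask pixel
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx
```

Measuring distance to the nearest mask pixel, rather than to a bounding-box center, keeps the rule sensible for large or irregular objects that the gaze may land on off-center.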
“…All these aspects concur to identify the grasp planned by the user, which determines the correct hand pre-shape, wrist orientation and hand closure, all of which would otherwise be unknown to the control system. Interestingly, it has been shown that the inclusion of visual information significantly increases the average grasp-type classification accuracy [9]. Specifically, in shared autonomy, the hand aperture and wrist orientation can be controlled automatically, based on visual input, while the closure of the fingers can be left to the user.…”
Section: Introduction (Mentioning)
confidence: 99%
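The shared-autonomy split described in the last statement (vision drives pre-shape and wrist orientation, EMG drives finger closure) could look like the sketch below. All names, types, and the clamping step are illustrative assumptions; the cited work describes the division of labour, not this code:

```python
from dataclasses import dataclass

@dataclass
class GraspCommand:
    preshape: str       # grasp type inferred from vision (e.g. "power", "pinch")
    wrist_angle: float  # wrist orientation in degrees, from the visual pipeline
    closure: float      # finger closure in [0, 1], driven by the user's EMG

def shared_autonomy_step(vision_grasp: str, vision_wrist: float,
                         emg_activation: float) -> GraspCommand:
    """Combine the automatic and user-driven channels into one command.

    vision_grasp / vision_wrist: outputs of a hypothetical vision module
    that recognises the target object and proposes pre-shape and orientation.
    emg_activation: normalised EMG amplitude in [0, 1]; the user retains
    direct control over how far the fingers close.
    """
    closure = min(max(emg_activation, 0.0), 1.0)  # clamp the user input
    return GraspCommand(vision_grasp, vision_wrist, closure)

# Example: vision proposes a power grasp at 30 degrees; the user half-closes.
print(shared_autonomy_step("power", 30.0, 0.5))
```

Keeping closure under direct user control preserves the sense of agency, while automating the channels (pre-shape, wrist orientation) that are hardest to infer from EMG alone.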