2020
DOI: 10.3390/s20216097
A Multimodal Intention Detection Sensor Suite for Shared Autonomy of Upper-Limb Robotic Prostheses

Abstract: Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals with impaired motor function. A major unresolved challenge, however, is the excessive cognitive load imposed by the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees of freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially availabl…

Cited by 17 publications (20 citation statements)
References 38 publications
“…Results showed a unique user-specific pattern of performance modulation of the man–machine interface and a high potential for enhancing the accuracy of the system. Future perspectives will focus on (a) collection of a clinical dataset for testing the proposed armband on a larger group of amputees in a clinical statistical study, (b) analysis of the effect of normal force distribution on acceleration-based MMG for comparison, (c) evaluation of hand gesture detection during the transient phase of the signal, and (d) incorporation of multimodal sensing to address sensitivity to motion-induced artifacts (demonstrated in [18], [19]). Finally, the system has been patented [47], providing a basis for subsequent translational work toward widespread patient use.…”
Section: Discussion
confidence: 99%
“…Examples of the use of MMG technology include the evaluation of muscle fatigue [11], muscle strength [12], balance [13], and muscle function [14], as well as the analysis of the mechanical response of the muscle after exercise [15]. Other applications of MMG technology include the examination of neuromuscular disorders [16] and prosthetic limb/robotic control [17]–[19]. In comparison to EMG, this mechanical method (i.e., MMG) has classically been considered a modality with a lower information transfer rate, in particular regarding the neurophysiology of muscle activation and the peripheral nervous system.…”
Section: A. Motivation
confidence: 99%
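
To make the MMG modality concrete: the signal is typically recorded with an accelerometer or microphone placed over the muscle and reduced to an amplitude envelope before classification. The following is a minimal sketch of such a pre-processing chain in Python; the sampling rate, pass band, and window length are illustrative assumptions, not parameters from the cited works.

```python
# Minimal sketch of a typical MMG pre-processing chain (parameters are
# assumptions for illustration, not taken from refs [11]-[19]): band-pass
# the raw transducer signal into the mechanomyographic band, then compute
# a windowed RMS envelope that a gesture or fatigue classifier could use.
import numpy as np
from scipy.signal import butter, filtfilt

def mmg_envelope(raw, fs=1000.0, band=(5.0, 100.0), win_s=0.1):
    """Band-pass raw MMG and return its windowed RMS envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw)          # zero-phase band-pass filter
    win = max(1, int(win_s * fs))           # samples per RMS window
    power = np.convolve(filtered ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(power)

# Synthetic stand-in for a recording: noise plus a 25 Hz burst after t = 1 s.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
raw = 0.05 * np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 25 * t) * (t > 1.0)
env = mmg_envelope(raw, fs)                 # envelope rises after the burst onset
```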
“…Another low-cost multimodal sensor was presented by Gardner et al [221]. It consists of an acoustic MMG sensor, a nine-DOF IMU sensor, and a video camera.…”
Section: Biopotentials and Image-based Systems
confidence: 99%
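
A hypothetical sketch of how such a three-modality suite might package its streams for downstream fusion is shown below; the frame layout and field names are assumptions for illustration, not details from Gardner et al. [221].

```python
# Hypothetical container for one time-stamped snapshot of the three
# modalities (acoustic MMG, nine-DOF IMU, camera); all names and shapes
# here are illustrative assumptions, not the interface of [221].
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorFrame:
    t: float              # timestamp, seconds
    mmg: np.ndarray       # acoustic MMG samples accumulated since last frame
    imu: np.ndarray       # nine values: accel (3) + gyro (3) + magnetometer (3)
    image: np.ndarray     # latest camera frame, H x W x 3 (uint8)

def make_frame(t, mmg_chunk, imu_reading, camera_image):
    """Bundle one synchronized snapshot of all three modalities."""
    imu_reading = np.asarray(imu_reading, dtype=float)
    assert imu_reading.shape == (9,), "nine-DOF IMU: 3 accel + 3 gyro + 3 mag"
    return SensorFrame(t, np.asarray(mmg_chunk), imu_reading, camera_image)
```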
“…Another relevant application is myoelectric control of prostheses [335,336], which allows users to recover lost functionality by controlling a prosthetic robotic device with their remaining muscle activity. In [337], computer vision (for autonomous object recognition) and mechanomyography (to estimate the intended muscle activation) are fused in a shared controller that predicts the user's grasp intent and then realizes it. In [338], once the user establishes pre-contact between the robotic hand and an object (manual task), the shared controller optimizes the actuation of the robotic hand's fingers to maximize hand–object contact and achieve full contact (robotic task).…”
Section: Task Decomposition Based Shared Control
confidence: 99%
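
The division of labor described in [337], where vision decides what grasp fits the object and MMG decides when the user intends to act, can be sketched as a simple fusion rule. The object-to-grasp mapping, activation threshold, and function names below are assumptions for illustration, not the controller published in [337].

```python
# Illustrative fusion rule for vision + MMG shared control: the autonomous
# side proposes a grasp type from the recognized object, and the user's MMG
# activation gates execution. Mapping, threshold, and names are assumed.
OBJECT_TO_GRASP = {
    "mug": "cylindrical",
    "card": "lateral",
    "ball": "spherical",
}

def shared_grasp_decision(detected_object, mmg_activation, threshold=0.6):
    """Return a grasp type to execute, or None to leave control with the user."""
    grasp = OBJECT_TO_GRASP.get(detected_object)  # autonomous: what to grasp
    if grasp is None or mmg_activation < threshold:
        return None                               # unknown object or no intent yet
    return grasp                                  # both agree: execute the grasp

# Example: shared_grasp_decision("mug", 0.8) -> "cylindrical"
```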