2015
DOI: 10.1177/0954411915570079

Multi-motion robots control based on bioelectric signals from single-channel dry electrode

Abstract: This article presents a multi-motion control system to help severely disabled people operate an auxiliary appliance using neck-up bioelectric signals measured by a single-channel dry electrode on the forehead. The single-channel dry-electrode multi-motion control system exhibits several practical advantages over its conventional counterparts that use multi-channel wet electrodes; among the challenges is an effective technique to extract bioelectric features for reliable implementation of multi degrees-of-freedom…
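The abstract's core idea, extracting several distinct commands from a single forehead channel, can be illustrated with a minimal sketch. The band choices (a low band for eye-blink/EOG activity, a higher band for jaw-clench/EMG activity), the sampling rate, the thresholds, and the command names below are assumptions for illustration, not the paper's published pipeline.

```python
# Illustrative sketch: mapping one forehead channel to discrete commands.
# Band edges, thresholds, and command names are assumptions, not the
# paper's published values.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (assumed)

def band_power(x, lo, hi, fs=FS):
    """Mean power of x within [lo, hi] Hz via a 4th-order Butterworth bandpass."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    y = filtfilt(b, a, x)
    return np.mean(y ** 2)

def classify_window(x, eog_thresh=1e-3, emg_thresh=1e-4):
    """Classify a 1-s window into one of three hypothetical commands.

    Low-frequency power (0.5-10 Hz) dominates during eye blinks (EOG);
    high-frequency power (20-45 Hz) dominates during jaw clenches (EMG).
    """
    p_eog = band_power(x, 0.5, 10.0)
    p_emg = band_power(x, 20.0, 45.0)
    if p_emg > emg_thresh:
        return "STOP"          # clench -> halt the appliance
    if p_eog > eog_thresh:
        return "NEXT_MOTION"   # blink -> cycle to the next motion mode
    return "IDLE"

# Example: one second of simulated rest-state signal classifies as IDLE.
window = 1e-3 * np.random.randn(FS)
print(classify_window(window))
```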

Cited by 8 publications (3 citation statements)
References 34 publications
Citation types: 0 supporting, 3 mentioning, 0 contrasting

“…We aimed at delivering a compromise between ease of control and flexibility for assistive applications: according to the proposed hybrid approach, the user retains unconstrained control in steering the robot toward the target object, and engaging autonomous guidance afterwards relieves the user from the burden of fine adjustment of the joints to attain intended postures, maintaining the goal focus. We demonstrated this approach using a desktop robotic arm whose 5+1 degree-of-freedom kinematics are analogous to existing assistive robotic manipulators, aiding clinical translation of the results [3], [5], [7], [38], [50], [65]. This experiment relied on elementary object detection via hue and geometric features, but the approach is viable with arbitrary vision systems, e.g., ones capable of recognizing objects belonging to specific classes through deep learning techniques; furthermore, it is, in principle, applicable to both image-based and position-based visual servoing [5], [44], [46], [66]-[68].…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
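
The hue-plus-geometry detection the excerpt mentions can be sketched briefly, assuming OpenCV is available; the hue range, saturation/value floors, and the area and aspect-ratio gates below are illustrative assumptions, not values from the cited work.

```python
# Illustrative sketch: detect a target object by hue and simple geometric
# features (area, aspect ratio). Bounds are assumptions chosen for a red
# object, not parameters from the cited experiment.
import cv2
import numpy as np

def detect_target(bgr, hue_lo=0, hue_hi=10, min_area=500):
    """Return the centroid (x, y) of the largest matching blob, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (hue_lo, 100, 100), (hue_hi, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        area = cv2.contourArea(c)
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        # Geometric gate: large enough and roughly square (graspable object).
        if area >= min_area and 0.5 <= aspect <= 2.0:
            if best is None or area > best[0]:
                best = (area, x + w // 2, y + h // 2)
    return None if best is None else (best[1], best[2])
```

In a hybrid scheme like the one described, a centroid returned by such a detector would seed the autonomous guidance phase once the user has steered the arm near the target.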
“…Yet, many practical applications of assistive robotics would benefit from control of complex mechanics based on a heavily constrained set of bio-signals acquired non-invasively, while allowing the user to focus on their goal rather than on kinematics. This need has driven recourse to either degrees-of-freedom reduction approaches, namely, linking axes via pre-determined relationships or postures, or “sparse” control schemes, wherein the user selects and activates pre-established motor sequences via “high-level” commands with or without the support of a graphical user interface [25], [33], [38]-[42]. Such an approach has proven particularly effective when combined with computer vision systems implementing object recognition and partially-autonomous guidance, which can drastically simplify and accelerate object manipulation tasks using robotic arms and even humanoid robots [5], [42]-[46].…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
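
The “sparse” control scheme described above amounts to a lookup from a few high-level commands to stored motor sequences; a minimal sketch follows. The command names, joint waypoints, and the `send_joint_targets` callback are hypothetical, not taken from the cited systems.

```python
# Illustrative sketch of a "sparse" control scheme: a few high-level commands,
# each triggering a pre-established joint-angle sequence. All names and
# waypoints are hypothetical.
SEQUENCES = {
    "REACH_TABLE": [(0, 45, 90, 0, 0), (10, 60, 80, 5, 0)],  # joint waypoints (deg)
    "RETRACT":     [(0, 30, 120, 0, 0), (0, 0, 150, 0, 0)],
    "GRASP":       [(None, None, None, None, 100)],           # gripper close only
}

def execute(command, send_joint_targets):
    """Map one high-level command to its stored waypoint sequence.

    `send_joint_targets` is a caller-supplied function that drives the arm;
    `None` entries mean that joint is left unchanged.
    """
    for waypoint in SEQUENCES[command]:
        send_joint_targets(waypoint)

# Example: print targets instead of driving real hardware.
execute("REACH_TABLE", send_joint_targets=print)
```

The point of such a scheme is that a single, easily produced bio-signal event (a blink, a clench) is enough to select one entry from this small command set.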
“…To facilitate more efficient and reliable interaction between these individuals and smart infrastructure, intuitive interfaces driven by control signals that reflect the user's intention have begun to gain increasing attention in recent years. Much of the recent effort focuses on these individuals' remaining body functions for interface establishment, including movements of the tongue [4], eyeball [5, 6], and head [7], facial muscle activity [8], and even brain activity signals [9]-[12].…”
Section: Introduction (citation type: mentioning)
Confidence: 99%