2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR)
DOI: 10.1109/icorr.2013.6650447
Integrated vision-based robotic arm interface for operators with upper limb mobility impairments

Abstract: An integrated, computer vision-based system was developed to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In this paper, a gesture recognition interface system developed specifically for individuals with upper-level spinal cord injuries (SCIs) was combined with object tracking and face recognition systems to be an efficient, hands-free WMRM controller. In this test system, two Kinect cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was u…

Cited by 14 publications (7 citation statements)
References 22 publications
“…We aimed at delivering a compromise between ease of control and flexibility for assistive applications: according to the proposed hybrid approach, the user retains unconstrained control in steering the robot toward the target object, and engaging autonomous guidance afterwards relieves the user from the burden of fine adjustment of the joints to attain intended postures, maintaining the goal focus. We demonstrated this approach using a desktop robotic arm whose 5+1 degree-of-freedom kinematics are analogous to existing assistive robotic manipulators, aiding clinical translation of the results [3], [5], [7], [38], [50], [65]. This experiment relied on elementary object detection via hue and geometric features, but the approach is viable with arbitrary vision systems, e.g., those capable of recognizing objects belonging to specific classes through deep learning techniques; furthermore, it is, in principle, applicable to both image-based and position-based vision servoing [5], [44], [46], [66]–[68].…”
Section: Discussion
confidence: 99%
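The excerpt above mentions "elementary object detection via hue and geometric features". As a loose illustration only (not code from the cited paper), the hue part of such a pipeline can be sketched as a simple hue threshold over pixels; the function name `hue_mask`, the sample pixel values, and the threshold range are all illustrative assumptions:

```python
import colorsys

def hue_mask(pixels, hue_lo, hue_hi):
    """Return a boolean mask selecting pixels whose hue (in [0, 1])
    falls inside [hue_lo, hue_hi]. Pixels are (R, G, B) tuples in 0-255.
    Illustrative sketch of hue-based color segmentation, not the
    cited paper's implementation."""
    mask = []
    for r, g, b in pixels:
        h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(hue_lo <= h <= hue_hi)
    return mask

# Example: pick out predominantly red pixels (hue near 0)
pixels = [(200, 30, 30), (30, 200, 30), (220, 40, 35)]
print(hue_mask(pixels, 0.0, 0.05))  # → [True, False, True]
```

In a real system the resulting mask would then be filtered by geometric features (e.g., blob area or aspect ratio) to reject spurious color matches.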
“…A substantial challenge in this area is to develop effective human-machine interfaces and paradigms for robot control, with the available technologies differing substantially in their residual motor function requirements, command throughput, ease of use, technical complexity, and cost. At the bottom end of the spectrum, joystick (or micro-switch) control is suitable mainly for patients with at least partially preserved hand function (e.g., after hemispheric stroke), being inexpensive and highly effective for driving, e.g., motorized wheelchairs and assistive arms [3], [4]; gesture-based control via low-cost camera systems is also emerging as a suitable alternative, posing less stringent requirements on upper limb and head movement capability [5], [6]. Highly accurate control of assistive devices has repeatedly been demonstrated based on superficially recorded face-muscle electromyographic (EMG) and/or electrooculographic (EOG) signals; although harvesting information from these signals is generally more expensive and technically demanding compared to micro-switch and camera-based interfaces, at a minimum it only requires integrity of cranial nerve function, which is generally preserved in patients with spinal lesions [7]–[12].…”
Section: Introduction
confidence: 99%
“…To facilitate the operation of robots for users with different levels of physical ability, many human-robot interaction (HRI) interfaces utilizing residual limb abilities have been studied, such as those using the chin [6], shoulder [7], gestures [8], and eye movements [9]. In these studies, the interaction mapped residual limb movement to robot instructions, such as forward, back, left, right, rotation, or other Cartesian motions of the robotic arm, as well as some preset simple household tasks; these could help users perform structured tasks but still required frequent limb movement.…”
Section: Introduction
confidence: 99%
“…EMG signals have also been used to control an upper limb exoskeleton in [15]. Finally, body movement was used in [16] and [17] with camera-based systems and inertial measurement units (IMUs), respectively. In [18], contralateral shoulder motions were related to hand muscle stimulation by an external shoulder position transducer.…”
Section: Introduction
confidence: 99%