2017
DOI: 10.1109/tbme.2017.2677902
3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments

Abstract: This is the first time that 3-D gaze has been utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is a promising, intuitive alternative for human-robot interaction, especially for disabled and elderly people who cannot handle conventional interaction modalities.

Cited by 55 publications (34 citation statements)
References 55 publications
“…Human eye-tracking data have also been used in the closed-loop control of robotic arms. Recently, Li et al. (2017) demonstrated how 3D gaze tracking could enable individuals with impaired mobility to control a robotic arm in an intuitive manner. Diverging from traditional gaze-tracking approaches that leverage two-dimensional (2D) egocentric camera videos, Li et al. presented methods for estimating object location and pose from gaze points reconstructed in 3D.…”
mentioning
confidence: 99%
“…In the early 2000s, the eye tracker was used as a direct substitute for a handheld mouse, such that the gaze point on a computer display designates the cursor's position and blinks function as button clicks (Lin et al., 2006; Gajwani and Chhabria, 2010). Since 2015, eye gaze has been used to communicate a 3D target position (Li et al., 2015a, 2017; Dziemian et al., 2016; Li and Zhang, 2017; Wang et al., 2018; Zeng et al., 2020) for directing the movement of the robotic end effector. No action recognition was required, as these methods assumed specific actions in advance, such as reach and grasp (Li et al., 2017), write and draw (Dziemian et al., 2016), and pick and place (Wang et al., 2018).…”
Section: Related Work
mentioning
confidence: 99%
“…When target points lie outside the calibration plane, 2D gaze estimation methods produce an error in the gaze points, i.e., the parallax error [20], due to the offset between the scene camera and the eyes. To address this, some methods use additional input features related to the depth coordinates of gaze points, such as pupil distances [5] or Purkinje images [21]. Notably, the depth error of gaze estimation is generally significant, as these additional features have only an indirect and weak correlation with the depth coordinates of gaze points.…”
Section: Related Work
mentioning
confidence: 99%
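The parallax error mentioned in the statement above can be illustrated with a back-of-the-envelope sketch. This is not taken from the cited paper; it assumes a simplified pinhole geometry in which the scene camera is displaced laterally from the eye by a fixed baseline, and a 2D gaze mapping is exact only on the calibration plane. The function name and parameter values are illustrative assumptions.

```python
def parallax_error(baseline_m, calib_depth_m, target_depth_m):
    """Lateral gaze-point error (metres, measured at the target's depth)
    when a 2D gaze mapping calibrated at calib_depth_m is applied to a
    target at target_depth_m, with the scene camera offset from the eye
    by baseline_m.

    Geometry (simplified, assumed): the eye's gaze ray is intersected
    with the calibration plane, and that intersection is re-projected
    through the offset scene camera; the mismatch grows linearly with
    the target's depth deviation from the calibration plane.
    """
    return abs(baseline_m) * abs(target_depth_m - calib_depth_m) / calib_depth_m

# Example: 3 cm eye-to-camera offset, mapping calibrated at 1 m.
print(parallax_error(0.03, 1.0, 2.0))  # target 1 m beyond the plane
print(parallax_error(0.03, 1.0, 1.0))  # target on the plane: zero error
```

The sketch shows why purely 2D mappings degrade with depth, which is the motivation the statement gives for adding depth-correlated input features such as pupil distance.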
“…Gaze tracking is mainly used for attention analysis [1], [2], human-computer interaction [3], and human-robot interaction (HRI) [4], [5]. Li et al. [5] recently reported the first attempt to achieve intuitive HRI using only gaze signals, which helps disabled people with upper-limb motor impairments, such as amputees and paralyzed patients, regain their upper-limb motor abilities. In mobile applications, such as gaze-based intuitive HRI, head-mounted 3D gaze trackers are preferred over table-mounted gaze trackers [6].…”
Section: Introduction
mentioning
confidence: 99%