Robotics: Science and Systems XVIII 2022
DOI: 10.15607/rss.2022.xviii.025
Gaze Complements Control Input for Goal Prediction During Assisted Teleoperation

Abstract: Shared control systems can make complex robot teleoperation tasks easier for users. These systems predict the user's goal, determine the motion required for the robot to reach that goal, and combine that motion with the user's input. Goal prediction is generally based on the user's control input (e.g., the joystick signal). In this paper, we show that this prediction method is especially effective when users follow standard noisily optimal behavior models. In tasks with input constraints like modal control, ho…
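The predict-then-blend pipeline the abstract describes can be sketched as a Bayesian goal-prediction step under a noisily optimal user model, followed by linear arbitration between the user's input and an assistive action. This is a minimal illustration, not the paper's implementation: the function names, the softmax alignment likelihood, and the linear blending rule are all assumptions for the sketch.

```python
import numpy as np

def predict_goal(belief, goals, user_input, robot_pos, rationality=5.0):
    """Bayesian goal prediction from control input: inputs pointing
    toward a goal raise that goal's probability (noisily optimal user)."""
    scores = []
    for g in goals:
        direction = g - robot_pos
        direction = direction / (np.linalg.norm(direction) + 1e-9)
        # Likelihood grows with alignment between input and goal direction.
        scores.append(rationality * float(direction @ user_input))
    likelihood = np.exp(np.array(scores) - np.max(scores))
    posterior = belief * likelihood
    return posterior / posterior.sum()

def blend(user_input, assist_action, confidence):
    """Linear arbitration: more assistance when prediction is confident."""
    return (1 - confidence) * user_input + confidence * assist_action

goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
belief = np.array([0.5, 0.5])
robot = np.array([0.0, 0.0])
u = np.array([1.0, 0.0])                    # user pushes toward goal 0
belief = predict_goal(belief, goals, u, robot)
assist = goals[int(np.argmax(belief))] - robot
cmd = blend(u, assist, confidence=float(belief.max()))
```

Since the input aligns exactly with the first goal, the posterior concentrates on it and the blended command stays on that heading; with ambiguous input the confidence drops and the user retains more direct control.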

Cited by 7 publications (5 citation statements). References 29 publications.
“…4). This is consistent with the findings in Madduri et al. [42] and Aronson et al. [3]: the former showed that their task-trained adaptive interface improved tracking performance within minutes of training, and the latter showed that natural gaze assisted in goal prediction for device control. However, as expected, the conventional mouse interface enables significantly better tracking than both myoelectric interfaces, underscoring the challenges in designing personalized myoelectric interfaces with high-dimensional inputs.…”
Section: Discussion (supporting)
confidence: 92%
“…Several studies have also integrated people’s natural gaze in human-robot interaction and have proved its advantages in anticipating user selections or intentions, thus improving system efficiency and task accuracy [1, 3, 11, 30, 63]. Aronson et al. [3] used natural gaze to predict users’ task goals and then used the predictions to improve a learning algorithm for manual (joystick)-based robot manipulation. Similarly, this study also leverages gaze to approximate the user’s desired actions.…”
Section: Related Work (mentioning)
confidence: 99%
“…Aronson and Admoni proposed an intent inference method for gaze-based shared autonomy systems [25]. A Partially Observable Markov Decision Process model used joystick and eye tracker signals in order to update probability distributions for candidate target objects.…”
Section: B Shared Autonomy Systems For Gaze-based Robot Control (mentioning)
confidence: 99%
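The belief update this citation describes — fusing joystick and eye-tracker observations into a probability distribution over candidate targets — can be illustrated with a simplified Bayes-filter step. This is a sketch under the assumption that the two observation channels are conditionally independent given the goal; the cited work uses a full POMDP model, and the likelihood values below are invented for illustration.

```python
import numpy as np

def update_belief(belief, joystick_lik, gaze_lik):
    """Fuse joystick and gaze evidence about the intended target,
    assuming the two observations are independent given the goal."""
    posterior = belief * joystick_lik * gaze_lik
    return posterior / posterior.sum()

belief = np.array([1 / 3, 1 / 3, 1 / 3])   # three candidate targets, uniform prior
joystick = np.array([0.5, 0.3, 0.2])       # control input weakly favors target 0
gaze = np.array([0.7, 0.2, 0.1])           # gaze strongly favors target 0
belief = update_belief(belief, joystick, gaze)
```

Because the evidence multiplies, agreement between the two channels sharpens the posterior faster than either signal alone, which is the practical benefit of adding gaze to control input for goal prediction.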
“…Compared to gripper-based manipulators, teleoperating dexterous hand-arm systems poses unprecedented challenges and often requires specialized apparatus that comes with high costs and setup efforts, such as Virtual Reality (VR) devices [4, 17, 15], wearable gloves [29, 30], handheld controllers [45, 46, 20], haptic sensors [12, 23, 50, 53], or motion capture trackers [65]. Fortunately, recent developments in vision-based teleoperation [2, 24, 16, 26, 42, 27, 21, 22, 3] have provided a low-cost and more generalizable alternative for teleoperating dexterous robot systems.…”
Section: Introduction (mentioning)
confidence: 99%