2016 American Control Conference (ACC)
DOI: 10.1109/acc.2016.7526794

Using visuomotor tendencies to increase control performance in teleoperation

Cited by 14 publications (9 citation statements)
References 14 publications
“…Current assistance with (semi-)autonomous agents has focused on approaching/reaching tasks in teleoperation (Khoramshahi and Billard (2018); Michelman and Allen (2002); Kaupp et al (2010); Mulling et al (2015)); however, this is not sufficient for tele-grasping and telemanipulation of objects. Methods that provide assistance in approaching, yet may not work as well in grasping scenarios, include envelope motion constraints (Abbott et al (2007); Webb et al (2016)), manually selected assistance levels (Feygin et al (2002); Li and Okamura (2003)), and shared control policies such as linear blending (Aarno et al (2005); Dragan and Srinivasa (2013)). Linear blending strategies may not work entirely, as the motion constraints seen from the manual operator's perspective and from the fully autonomous perspective may differ.…”
Section: Related Work (mentioning)
confidence: 99%
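The linear blending mentioned in this statement arbitrates between the operator's command and the autonomous policy's command with a single weight. A minimal sketch follows; the function name and the confidence-driven weight are illustrative assumptions, not the cited papers' exact formulations.

```python
import numpy as np

def blend_commands(u_user, u_auto, alpha):
    """Linearly blend the operator's command with the autonomous policy's
    command: u = (1 - alpha) * u_user + alpha * u_auto.
    alpha in [0, 1]: 0 = fully manual, 1 = fully autonomous."""
    u_user = np.asarray(u_user, dtype=float)
    u_auto = np.asarray(u_auto, dtype=float)
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return (1.0 - alpha) * u_user + alpha * u_auto

# Example: an intent estimator's confidence drives the arbitration weight.
# confidence_in_goal is a hypothetical placeholder for such an estimator.
confidence_in_goal = 0.7
u_cmd = blend_commands(u_user=[0.10, 0.00, -0.20],  # operator velocity (m/s)
                       u_auto=[0.00, 0.05, -0.25],  # autonomous velocity (m/s)
                       alpha=confidence_in_goal)
```

The limitation raised in the statement is visible here: when u_user and u_auto point in conflicting directions, the blended command can satisfy neither the operator's intent nor the autonomous plan.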
“…Gaze estimation enables the study of visual perception mechanisms in humans and has been used in many fields, such as action recognition [1], situation awareness estimation [2], and driver attention analysis [3]. It is also a non-verbal communication channel and can therefore be applied to shared autonomy [4] or teleoperation [5] in the context of Human-Robot Interaction (HRI).…”
Section: Introduction (mentioning)
confidence: 99%
“…A system has been developed that lets users specify a 2D end-effector path via a click-and-drag operation, with collision avoidance implemented by a sampling-based motion planner (Nicholas et al, 2013). The operator's gaze is employed to indicate the target, and the robotic arm is then guided to reach it, in both cases using potential fields for autonomy (Webb et al, 2016). In the abovementioned paradigms, since the low-level robot motions are realized exclusively by motion planner-based autonomy without user involvement, the user can regain control authority only when the reactive behavior (e.g., collision avoidance) finishes (Kim et al, 2006).…”
Section: Introduction (mentioning)
confidence: 99%
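This statement describes gaze-directed reaching driven by potential fields. A minimal sketch of a quadratic attractive potential is given below; it illustrates the general technique only, not Webb et al.'s specific controller, and x_gaze_target stands in for a hypothetical gaze-to-3D-target estimator.

```python
import numpy as np

def attractive_field_velocity(x, x_goal, gain=1.0, v_max=0.1):
    """Velocity command from the gradient of a quadratic attractive potential
    U(x) = 0.5 * gain * ||x - x_goal||^2, i.e. v = -grad U(x).
    The command is saturated at v_max so the arm approaches the goal smoothly."""
    v = -gain * (np.asarray(x, dtype=float) - np.asarray(x_goal, dtype=float))
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed
    return v

# Example: a gaze fixation supplies the goal; the end effector is servoed toward it.
x_gaze_target = np.array([0.50, 0.20, 0.30])  # target position in base frame (m)
x_ee = np.array([0.40, 0.00, 0.50])           # current end-effector position (m)
v_cmd = attractive_field_velocity(x_ee, x_gaze_target, gain=2.0, v_max=0.05)
```

Because the field is evaluated at every control step, the commanded velocity updates continuously as the gaze-indicated target moves, which is what lets the autonomy guide the arm without explicit path replanning.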