1999
DOI: 10.1109/5254.809564

Gesture-based programming for robotics: human-augmented software adaptation

Abstract: Gesture-Based Programming is a paradigm for programming robots by human demonstration in which the human demonstrator directs the self-adaptation of executable software. The goal is to provide a more natural environment for the user as programmer and to generate more complete and successful programs by focusing on task experts rather than programming experts. We call the paradigm "gesture-based" because we try to enable the system to capture, in real-time, the intention behind the demonstrator's fleeting, cont…

Cited by 12 publications (5 citation statements)
References 23 publications
“…The system is able to model high-level task specifications but not the sensor feedback during contact. Voyles, Morrow, and Khosla (1999) proposed a gesture-based programming paradigm where the system is assumed to have a set of basic skills (also referred to as a priori control policies (Kortenkamp, Bonasso, and Subramanian 2001), or sensori-motor primitives (Morrow and Khosla 1997)) from which the system can compose programs. Human demonstration is observed through gesture recognition and interpretation agents, and the correct skills are selected based on the votes from the agents.…”
Section: Robot Programming (mentioning, confidence: 99%)
“…W81XWH-18-1-0769. 1 School of Engineering Technology, 2 School of Industrial Engineering, Purdue University, IN 47907, USA mbalakun@purdue.edu, lvenkate@purdue.edu, jpadmaku@purdue.edu, rvoyles@purdue.edu, jpwachs@purdue.edu Fig. 1: Super Baxter robot testbed with multimodal sensors or force-based tasks rather than a pure kinematic task.…”
Section: Introduction (mentioning, confidence: 99%)
“…Hence the interface used for demonstrating the task plays a crucial role in LfD approaches, it determines the richness of data available to learn the task. Interfaces like sensorized gloves [2], or kinesthetic teaching (hand-held guiding) [3] enables the robot to acquire sufficient information to learn force signatures of tasks during the demonstration. But are not intuitive for the human demonstrator.…”
Section: Introduction (mentioning, confidence: 99%)
“…Recently, several techniques have been proposed that use human motion measurements directly as robot teaching data for automatic programming: teaching by showing [2], assembly plan from observation [3][4], gesture-based programming [5], and robot learning [6][7][8][9][10]. Applications for dual arm robots [11] have also been presented.…”
Section: Introduction (mentioning, confidence: 99%)