2005
DOI: 10.1177/0278364904049250

Interactive Multimodal Robot Programming


Cited by 44 publications (9 citation statements)
References 46 publications
“…Natural human interfaces for richer HRI have been explored for service robots. Vision-based interfaces in mobile robot navigation [18, 27-29], haptic feedback interfaces in tele-operation and surgery [3, 30, 31], voice-based interfaces [32, 33], and multimodal interfaces that integrate two or more interaction modalities [34, 35] are some of the most commonly used approaches.…”
Section: HRI Modals and Devices (mentioning)
Confidence: 99%
“…The use of gestures to guide robots (both humanoids and non-humanoids) has attracted much attention over the past ten years [Triesch and von der Malsburg, 1998; Nickel and Stiefelhagen, 2007; Iba et al., 2005]. Xu et al. [2007] compared the effectiveness of a gesture interface to that of a joystick for controlling a miniature robot.…”
Section: Related Work (mentioning)
Confidence: 99%
“…The first question that has to be answered is what kind of nonverbal behavior and gestures will be used by the human operator. Answering this question is essential, as most gesture recognition systems require a pre-specified set of gestures to be recognized [Xu et al., 2007; Iba et al., 2005]. The first goal of this study is to shed light on the patterns of gesture usage during a well-specified interaction scenario, namely collaborative navigation.…”
Section: Introduction (mentioning)
Confidence: 99%
“…The new skill built by the user is represented as a Sequential Function Chart (SFC), which makes it possible to build multiple programming structures, such as single sequences, sequence selection (selection mode), simultaneous sequences (parallel mode), and loops, and not just a serial sequence of actions as in other similar works such as [11], [12], or [13].…”
Section: Introduction (mentioning)
Confidence: 99%
“…In [12] a vacuum robot is programmed for a room-cleaning task. Verbal commands and gestures are translated into a sequence of cleaning actions.…”
Section: Introduction (mentioning)
Confidence: 99%