IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/irds.2002.1043875

Multi-modal human-machine communication for instructing robot grasping tasks

Abstract: A major challenge for the realization of intelligent robots is to supply them with cognitive abilities in order to allow ordinary users to program them easily and intuitively. One way of such programming is to teach work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal clues like gestural commands. We report progress…

Citations: cited by 59 publications (46 citation statements, all classified as "mentioning")
References: 16 publications

“…The GRAVIS robot system [20] combines visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation to allow multi-modal task-oriented instructions. For manipulation tasks this setup employs a standard 6-DOF PUMA manipulator operated with the real-time RCCL-command library together with a 9-DOF dextrous robot hand developed at the Technical University of Munich (TUM).…”
Section: A. GRAVIS Robot System and TUM Hand
Citation type: mentioning (confidence: 99%)
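
The excerpt above describes task-oriented instructions that combine speech with pointing gestures, but gives no implementation details. Purely as an illustration of the idea, and with every name and number below being an assumption rather than anything from the GRAVIS system, a spoken-plus-pointed reference could be resolved by filtering the vision system's detections on the spoken category and picking the one closest to the pointing ray:

```python
# Illustrative sketch only; not GRAVIS code. SceneObject, pick_referent and all
# numeric values are assumptions made for this example.
from dataclasses import dataclass

import numpy as np


@dataclass
class SceneObject:
    label: str            # category reported by the vision system, e.g. "cube"
    position: np.ndarray  # 3-D position in the robot base frame (metres)


def distance_to_ray(point, ray_origin, ray_dir):
    """Perpendicular distance from a point to the pointing ray."""
    d = ray_dir / np.linalg.norm(ray_dir)
    v = point - ray_origin
    return float(np.linalg.norm(v - np.dot(v, d) * d))


def pick_referent(objects, spoken_label, ray_origin, ray_dir, max_offset=0.15):
    """Return the detection matching the spoken label that lies nearest the pointing ray."""
    candidates = [o for o in objects if o.label == spoken_label] or list(objects)
    best = min(candidates, key=lambda o: distance_to_ray(o.position, ray_origin, ray_dir))
    return best if distance_to_ray(best.position, ray_origin, ray_dir) <= max_offset else None


scene = [SceneObject("cube", np.array([0.40, 0.10, 0.02])),
         SceneObject("cube", np.array([0.55, -0.20, 0.02])),
         SceneObject("ball", np.array([0.35, 0.30, 0.03]))]
target = pick_referent(scene, spoken_label="cube",
                       ray_origin=np.array([0.0, 0.0, 0.5]),   # roughly at the instructor's hand
                       ray_dir=np.array([0.8, 0.2, -0.9]))     # estimated pointing direction
print(target)  # -> the cube near (0.40, 0.10, 0.02), about 2 cm from the ray
```

In a real multi-modal system both the recognised word and the gesture estimate carry uncertainty, so their scores would be fused rather than filtered hard on the label; the hard filter here is only for brevity.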
“…The implementations of step 1 of our grasp strategy differ between the two robot hand setups in use. The TUM Hand setup allows the human instructor to identify an object to be grasped by speech, pointing gestures, or a combination thereof [20]. The 3D position of the referred object is resolved by a stereo vision system to an accuracy of about 3 cm.…”
Section: Portable Grasp Strategy
Citation type: mentioning (confidence: 99%)
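
The excerpt quotes an accuracy of about 3 cm for the stereo-resolved object position. As a minimal sketch of the underlying geometry only, assuming a calibrated and rectified stereo pair (the focal length, baseline, and pixel coordinates are invented for the example, not taken from the paper), disparity-based triangulation recovers that 3-D position as follows:

```python
# Minimal rectified-stereo triangulation sketch; all calibration numbers are
# illustrative assumptions, not values from the paper.
def triangulate(u_left, v_left, u_right, focal_px, baseline_m, cx, cy):
    """Return (X, Y, Z) in the left camera frame for one matched pixel pair."""
    disparity = u_left - u_right            # horizontal offset between the two views
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    Z = focal_px * baseline_m / disparity   # depth is inversely proportional to disparity
    X = (u_left - cx) * Z / focal_px
    Y = (v_left - cy) * Z / focal_px
    return X, Y, Z


# Example: an object centroid matched at (412, 260) in the left image and
# (292, 260) in the right image of an assumed 12 cm baseline rig.
print(triangulate(412, 260, 292, focal_px=800.0, baseline_m=0.12, cx=320.0, cy=240.0))
# -> roughly (0.09, 0.02, 0.80) metres
```

At tabletop range a one-pixel disparity error already shifts the depth by several millimetres in this configuration, so an overall error of a few centimetres, as quoted above, is plausible.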
“…Starting from a previous robotics setup developed in the course of the special research unit "SFB 360" and providing a large number of specialised processing modules [20], we have implemented a robot system (fig. 8) whose grasping abilities connect sensori-motor control with vision and language understanding.…”
Section: Towards Higher Cognition
Citation type: mentioning (confidence: 99%)
“…Within this domain, McGuire et al. [3], for example, have developed a system for attending to pointing hands in reference to objects of interest. First, the robot's vision system focuses on human hand gestures (i.e.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)