Proceedings of the 2001 Workshop on Perceptive User Interfaces 2001
DOI: 10.1145/971478.971483
Human-robot interface based on the mutual assistance between speech and vision

Abstract: This paper presents a user interface for a service robot that can fetch objects requested by the user. A speech-based interface is appropriate for this application, but speech alone is not sufficient. The system also needs a vision-based interface to recognize gestures. Moreover, it needs vision capabilities to obtain real-world information about the objects mentioned in the user's speech. For example, the robot needs to find the target object ordered by speech in order to carry out the task. This can be consider…

Cited by 7 publications (5 citation statements); references 3 publications.
“…Natural-language interfaces can be applied to make the use of intelligent robots more flexible, such as in the case of the autonomous mobile two-arm robot KAMRO (Lueth et al. 1994). Combining gesture and voice strategies gives rise to multimodal interfaces, providing a large range of interactions, intuitive to humans and advantageous to service robots (Yoshizaki et al. 2001). There is evidence that multimodal displays and input controls have great potential for improving teleoperation performance (Chen et al. 2007).…”
Section: State of the Art
confidence: 99%
“…An innovation of the work described here involves developing cattle and human behaviour models, perception techniques, and using them along with a specific herding technique, called low-stress herding (Smith, 1998), to carry out assistive or autonomous herding activities. Some researchers have attempted to communicate with assistive robots by developing speech and gesture recognition systems, to convey their intentions to the robot (Fischer et al., 1996; Topp et al., 2004; Yoshizaki et al., 2001). One differentiating aspect of our work is that no overt speech or gesture communication between the human and robot is used.…”
Section: Introduction
confidence: 99%
“…Natural language interfaces can be applied to make the use of intelligent robots more flexible, such as in the case of the autonomous mobile two-arm robot KAMRO [235]. Combining gesture and voice strategies gives rise to multimodal interfaces, providing a large range of interactions, intuitive to humans and advantageous to service robots [236]. There is evidence that multimodal displays and input controls have great potential for improving teleoperation performance [208].…”
Section: Teleoperation
confidence: 99%