2001
DOI: 10.1109/mis.2001.1183338

Building a multimodal human-robot interface

Abstract: However, the situation becomes a bit more complex when we begin to build and interact with machines or robots that either look like humans or have human functionalities and capabilities. Then, people well might interact with their humanlike machines in ways that mimic human-human communication. For example, if a robot has a face, a human might interact with it similarly to how humans interact with other creatures with faces. Specifically, a human might talk to it, gesture to it, smile at it, and so on. If a huma…

Cited by 177 publications (75 citation statements)
References 6 publications
“…Nontraditional input devices such as voice [12] and gesture [19] are of interest for novice robot control as well as in assistive robotics for those who have lost function or fine motor skills. Another technology under consideration is brain-computer interfaces (BCIs).…”
Section: Related Work
confidence: 99%
“…Perzanowski et al. provided a multimodal interface in which the user can specify the target location by tapping a map on a PDA screen [4]. Lundberg et al. also proposed a PDA interface for a robot, where the user can specify the area for the robot to explore [5].…”
Section: Robot Control
confidence: 99%
“…Thus, a well-defined multimodal command set combining verbal and nonverbal messages would help users of home-use robots. Perzanowski et al developed a multimodal human-robot interface that enables users to give commands combining spoken commands and pointing gestures (Perzanowski et al, 2001). In the system, spoken commands are analysed using a speech-to-text system and a natural language understanding system that parses text strings.…”
Section: Multimodal Command Language
confidence: 99%
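The statement above describes the general pattern of such systems: a spoken command is parsed to text, and a deictic word like "there" is resolved against a concurrent pointing gesture. The following is a minimal illustrative sketch of that fusion step, assuming hypothetical names and data structures; it is not code from Perzanowski et al.'s system.

```python
# Hypothetical sketch of multimodal command fusion: a parsed spoken command
# is combined with a pointing-gesture target to form one robot command.
# All class and function names here are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SpokenCommand:
    action: str                    # e.g. "go", extracted from the parsed text
    target_word: Optional[str]     # e.g. "there" signals a deictic reference

@dataclass
class PointingGesture:
    location: Tuple[float, float]  # map coordinates the user pointed at

def fuse(cmd: SpokenCommand, gesture: Optional[PointingGesture]):
    """Resolve a deictic spoken command ("go there") against a gesture."""
    if cmd.target_word == "there":
        if gesture is None:
            return ("clarify", None)           # no gesture: ask where "there" is
        return (cmd.action, gesture.location)  # gesture resolves the reference
    return (cmd.action, None)                  # no deixis: speech alone suffices

print(fuse(SpokenCommand("go", "there"), PointingGesture((2.0, 3.5))))
# → ('go', (2.0, 3.5))
```

The key design point this illustrates is that neither channel alone carries the full command: speech supplies the action, while the gesture grounds the spatial reference.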