RO-MAN 2004. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759)
DOI: 10.1109/roman.2004.1374833
Object recognition through human-robot interaction by speech

Cited by 16 publications (19 citation statements). References 9 publications.
“…There are 256 gray levels in an 8-bit grayscale image, and the intensity of each pixel can range from 0 to 255, with 0 being black and 255 being white. The grayscale value of an RGB image was obtained by averaging the channels of each pixel as follows [8]:…”
Section: Icose Conference Proceedings (mentioning)
confidence: 99%
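The formula elided at the end of this statement is per-pixel channel averaging. A minimal sketch of that conversion is given below, assuming the image is an 8-bit RGB NumPy array; the exact expression from [8] is not quoted, so plain averaging is shown as described.

import numpy as np

def rgb_to_gray_average(rgb: np.ndarray) -> np.ndarray:
    """Convert an 8-bit RGB image to grayscale by averaging the three channels."""
    # Each output pixel lies in [0, 255]: 0 is black, 255 is white.
    gray = rgb.astype(np.float32).mean(axis=2)
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

# Example: a 2x2 test image (pure red, pure green, pure blue, white).
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
print(rgb_to_gray_average(img))
# [[ 85  85]
#  [ 85 255]]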
“…Most practical acoustic source localization schemes are based on time delay of arrival estimation for the following reasons: such systems are conceptually simple; they are reasonably effective in reverberant environments [3]; moreover, their low computational complexity makes them well suited to real-time implementation with several sensors.…”
Section: Mending Robot Hearing Localization System (mentioning)
confidence: 99%
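As a rough illustration of time-delay-of-arrival estimation between two microphones, the sketch below uses plain cross-correlation of two NumPy signals; the sampling rate, signal names, and synthetic 5-sample delay are assumptions for the example, not the cited system's implementation.

import numpy as np

def estimate_tdoa(sig_a: np.ndarray, sig_b: np.ndarray, fs: float) -> float:
    """Estimate how many seconds sig_b lags behind sig_a (positive = later)."""
    # The peak of the full cross-correlation gives the sample lag between channels.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return -lag / fs

# Synthetic check: sig_b is sig_a delayed by 5 samples at 16 kHz.
fs = 16000.0
rng = np.random.default_rng(0)
sig_a = rng.standard_normal(1024)
sig_b = np.concatenate([np.zeros(5), sig_a[:-5]])
print(round(estimate_tdoa(sig_a, sig_b, fs) * fs))  # 5 samples, roughly 0.31 ms

Given the delays from several microphone pairs and the known array geometry, the source direction can then be triangulated; practical systems typically use generalized cross-correlation weightings for added robustness to reverberation.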
“…Multimodal interfaces [1][12][13] are considered strong candidates. Thus, we have been developing a helper robot that carries out tasks ordered by the user through voice and/or gestures [9][15][18][19]. In addition to gesture recognition, such robots need vision systems that can recognize the objects mentioned in speech.…”
Section: Introduction (mentioning)
confidence: 99%
“…It is, however, difficult to realize vision systems that can work in various conditions. Thus, we have proposed using the human user's assistance through speech [9][15][18][19]. When the vision system cannot achieve a task, the robot asks the user a question so that the user's natural response can provide helpful information for its vision system.…”
Section: Introduction (mentioning)
confidence: 99%
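A minimal sketch of the question-and-answer loop described in this statement is shown below; the helper functions (detect_objects, ask_user, parse_reply) are hypothetical stand-ins rather than the authors' API, and a real robot would wire in an actual object detector, speech synthesis, and speech recognition.

from typing import Optional

def detect_objects(image, name: str, colour: Optional[str] = None) -> list:
    """Stub detector: pretends recognition succeeds only once a colour hint is given."""
    return [{"name": name, "colour": colour}] if colour else []

def ask_user(question: str) -> str:
    """Stub dialogue step: a real robot would speak the question and recognize the reply."""
    print("Robot:", question)
    return "It is the red one."

def parse_reply(reply: str) -> dict:
    """Tiny keyword parser that extracts a colour hint from the user's reply."""
    for colour in ("red", "green", "blue", "yellow"):
        if colour in reply.lower():
            return {"colour": colour}
    return {}

def recognize_with_user_help(image, target_name: str) -> Optional[dict]:
    """Try vision alone; if it fails, ask the user and retry with the extracted hint."""
    candidates = detect_objects(image, target_name)
    if candidates:
        return candidates[0]
    reply = ask_user(f"I cannot find the {target_name}. What colour is it?")
    hint = parse_reply(reply)
    candidates = detect_objects(image, target_name, **hint)
    return candidates[0] if candidates else None

print(recognize_with_user_help(image=None, target_name="cup"))
# -> {'name': 'cup', 'colour': 'red'}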