2007 International Conference on Control, Automation and Systems
DOI: 10.1109/iccas.2007.4406660
An application of speech/speaker recognition system for human-robot interaction

Abstract: We introduce a real-time, robust speech/speaker recognition system for isolated word recognition using a distant microphone. By applying the proposed system to a robot platform, robust human-robot interaction can be established in reverberant office environments. For computational efficiency, the dynamic time warping (DTW) algorithm is used for pattern matching. We select the gamma distribution, in contrast to the conventional Gaussian distribution, to model the probability density function of the total accumulated distance. By…
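The abstract names two concrete technical choices: a DTW matcher and a gamma model for the total accumulated distance. The sketch below illustrates that combination; it is not the paper's implementation, and the feature dimensions, random stand-in data, and SciPy fitting step are assumptions added for illustration.

```python
import numpy as np
from scipy.stats import gamma

def dtw_distance(x, y):
    """Total accumulated distance between two feature sequences
    (e.g., frames of MFCC-like vectors) under dynamic time warping."""
    n, m = len(x), len(y)
    # Local Euclidean distance between every pair of frames.
    local = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = local[i - 1, j - 1] + min(
                acc[i - 1, j],      # vertical step
                acc[i, j - 1],      # horizontal step
                acc[i - 1, j - 1],  # diagonal step
            )
    return acc[n, m]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in features: 13-dimensional frames, random here for the demo.
    template = rng.normal(size=(40, 13))
    scores = [dtw_distance(template, rng.normal(size=(50, 13)))
              for _ in range(30)]
    # Model the accumulated-distance scores with a gamma PDF rather than
    # a Gaussian, mirroring the modeling choice described in the abstract.
    shape, loc, scale = gamma.fit(scores)
    print(f"gamma fit: shape={shape:.2f}, loc={loc:.2f}, scale={scale:.2f}")
```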

Cited by 1 publication (1 citation statement); the citing publication appeared in 2012. References 5 publications.
“…For instance, robots that are teleoperated may receive human input from interfaces such as one- and two-handed controllers, a computer mouse, or a keyboard (e.g., Takayama, Marder-Eppstein, Harris, & Beer, 2011). Conversely, semi-autonomous robots may receive human input from shared control methods, such as demonstration (Billard, Calinon, Ruediger, & Schaal, 2008), direct physical interfaces/manipulation (Chen & Kemp, 2011), gesture recognition (Charles et al., 2009; Gielniak & Thomaz, 2011), laser pointers (Nguyen, Jain, Anderson, & Kemp, 2008; Kemp et al., 2008), or voice command (Hyun, Gyeongho, & Youngjin, 2007). With so many control methods being developed, how should designers determine which to use?…”
Section: Introduction (mentioning)
Confidence: 99%