2012 · DOI: 10.14198/jopha.2012.6.1.02
Engaging human-to-robot attention using conversational gestures and lip-synchronization

Abstract: Human-Robot Interaction (HRI) is one of the most important subfields of social robotics. In several applications, text-to-speech (TTS) techniques are used by robots to provide feedback to humans. In this respect, a natural synchronization between the synthetic voice and the mouth of the robot could contribute to improving the interaction experience. This paper presents an algorithm for synchronizing Text-To-Speech systems with robotic mouths. The proposed approach estimates the appropriate aperture of t…
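The abstract is truncated here, so the paper's exact aperture-estimation method is not visible. A common approach to this kind of TTS lip-sync, and a plausible reading of "appropriate aperture", is to drive the mouth from the short-time energy of the synthesized waveform. A minimal sketch under that assumption (function name, frame size, and normalization are all hypothetical, not the authors' algorithm):

```python
import numpy as np

def aperture_from_waveform(samples: np.ndarray, sample_rate: int,
                           frame_ms: float = 25.0) -> np.ndarray:
    """Map a synthesized speech waveform to mouth apertures in [0, 1].

    Hypothetical illustration: short-time RMS energy, normalized per
    utterance, serves as a proxy for how wide the mouth should open.
    """
    frame_len = max(1, int(sample_rate * frame_ms / 1000.0))
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames.astype(np.float64) ** 2, axis=1))
    peak = rms.max()
    return rms / peak if peak > 0 else rms  # 0 = closed, 1 = fully open


# Usage sketch: half a second of a 120 Hz tone standing in for TTS output.
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
wave = 0.8 * np.sin(2 * np.pi * 120 * t)
apertures = aperture_from_waveform(wave, sr)
print(apertures[:5])  # one aperture value per 25 ms frame
```

Each frame's aperture would then be sent to the mouth actuator at the frame rate, keeping the motion aligned with the synthetic voice.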

Cited by 3 publications (1 citation statement) · References 33 publications
“…However, the study details the limitations of the lip-synchronisation application when decoding live input as the system neglected to recognise the variability in the tonality and pitch of different human voices which frequently produced incorrect lip positions. A similar study [25] explored the implementation of speech wave signal processing to create jaw movement in robots, derived from previous studies by [26] and [27]. Although the mouth articulation system was successful in correctly analysing incoming audio to detect frequency on/off status for synchronisation with the open/closed mouth positions of the robots, this study and the previous examples neglect lip synchronisation and focus solely on jaw position to incoming sound frequencies.…”
Section: The Uncanny Valley · Citation type: mentioning · Confidence: 99%
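The citing statement describes jaw-only systems that simply detect whether sound is present ("frequency on/off status") and toggle the mouth between open and closed, with no lip shaping. A minimal sketch of that thresholding scheme, which is the behaviour being criticised rather than the cited paper's method (threshold value and names are assumptions):

```python
import numpy as np

def jaw_states(samples: np.ndarray, sample_rate: int,
               frame_ms: float = 30.0, threshold: float = 0.05) -> np.ndarray:
    """Classify each frame of incoming audio as jaw open (True) or closed.

    Illustrative only: frames whose RMS amplitude exceeds a fixed
    threshold count as 'sound on', so the jaw opens; otherwise it stays
    closed. This is binary jaw articulation driven by signal energy,
    with no modelling of lip positions or phonemes.
    """
    frame_len = max(1, int(sample_rate * frame_ms / 1000.0))
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames.astype(np.float64) ** 2, axis=1))
    return rms > threshold  # True = jaw open, False = jaw closed
```

Because this scheme reacts only to energy, not to spectral content, it cannot distinguish vowels from consonants or adapt to different voices, which is exactly the limitation the citing paper points out.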