Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents 2020
DOI: 10.1145/3383652.3423863
Evaluating the Influence of Phoneme-Dependent Dynamic Speaker Directivity of Embodied Conversational Agents' Speech

Abstract: Figure 1: An exemplary speaker directivity for the vowel 'a' at 1600 Hz (modulation per direction shown as distance and color, from attenuation (green) to amplification (red)), displayed in front of the head model used, including the vocal tract within and the mouth opening (blue). The outdoor scene used in the study is visible in the background.

Cited by 6 publications (3 citation statements)
References 23 publications
“…Furthermore, some researchers have focused on investigating turn-taking mechanisms in dialogues with the goal of improving human–machine interaction in conversational systems [ 29 ]. For instance, Ehret et al enhanced embodied conversational agents (ECAs) by incorporating non-verbal features such as gestures and gaze to signal turn-taking, thereby making human–machine dialogues smoother and more enjoyable [ 30 ]. In the realm of voice-based human–machine interaction, managing turn-taking in conversations is a crucial area of focus [ 31 , 32 ].…”
Section: Related Work
confidence: 99%
“…More recently, Ackermann et al [13] demonstrated that the fluctuations created by the movement of the musicians during solo musical performances are audible both under anechoic and reverberant conditions. Similarly, Ehret et al [14] performed a perceptual evaluation involving static and dynamic phoneme-dependent voice directivities. They showed that participants were not able to distinguish phoneme-dependent directivities from averaged directivities and that their subjective preference might not be dependent on the realism of the directional rendering.…”
Section: Introduction
confidence: 99%
“…Similarly, detailed radiation characteristics have been reported for the singing voice [16,[20][21][22]. Furthermore, Ehret et al evaluated the effect of phoneme-dependent directivity implemented for avatars in a virtual space [23]. Although these reports have disclosed the details of the radiation characteristics of the human voice, the presented information does not directly provide the angular resolution required to reproduce the radiation characteristics of the human voice for listening purposes.…”
Section: Introduction
confidence: 99%