2017
DOI: 10.1007/978-3-319-58750-9_28

Creating a Gesture-Speech Dataset for Speech-Based Automatic Gesture Generation

Cited by 29 publications (8 citation statements)
References 4 publications
“…The most important goal in gesture generation is to produce motion patterns that are convincing to human observers. Since improvements in objective measures do not always translate into superior subjective quality for human observers, we validated our conclusions above through a number of user studies: two on the Japanese dataset from Takeuchi, Kubota, et al (2017) and one on the English dataset of Ferstl and McDonnell (2018). All of them used the same questionnaire as in the baseline paper Hasegawa et al (2018), shown in Table 2.…”
Section: Subjective Evaluation and Discussion
confidence: 77%
“…There are two main methods for obtaining motion data for gesture synthesis: optical motion capture [MYL*16, TKS*17, LDM*19, JSCS19, FM18] or pose estimation from monocular video [YKJ*19, ALIM20, JKEB19, KNN*22, HXM*21].…”
Section: Data‐driven Approaches
confidence: 99%
“…For our experiments, we used a gesture-speech dataset collected by Takeuchi et al [36]. Motion data were recorded in a motion capture studio from two Japanese individuals having a conversation in the form of an interview.…”
Section: Gesture-Speech Dataset
confidence: 99%