Proceedings of the 3rd International Symposium on Movement and Computing 2016
DOI: 10.1145/2948910.2948944
The i-Treasures Intangible Cultural Heritage dataset

Cited by 18 publications (10 citation statements)
References 3 publications
“…By means of a text presented during the announcement of the test, participants were informed before the start of the study about: the general goals and duration of the online survey; the payment that can be expected when finishing the survey; and the fact that their answers will be used to establish average responses to the questions for a scientific publication on audio quality. Data Availability Statement: Datasets used in this paper: The singing dataset used in this paper was created by combining the CREL Research Database (SVDB) [52], the NUS sung and spoken lyrics corpus [53], the Byzantine singing from the i-Treasures Intangible Cultural Heritage dataset [54], the PJS phoneme-balanced Japanese singing-voice corpus [55], JVS-MuSiC [56], the Tohoku Kiritan and Itako singing database [57], as well as internal singing databases used for the IRCAM Singing Synthesizer [59] and other projects. For speech we use a combined dataset of VCTK [50] and Att-HACK [51].…”
Section: Future Work
confidence: 99%
“…In (12), w represents the weight, b corresponds to the bias, and h_{t−1} denotes the output value corresponding to time t − 1. Meanwhile, x_t refers to the input value, σ represents the activation function, and f stands for the forget gate.…”
Section: Skeleton Movement Recognition Based On Optimized
confidence: 99%
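The forget-gate formulation quoted above, f = σ(w·[h_{t−1}; x_t] + b), can be sketched numerically as follows. This is a minimal illustrative sketch, not the cited paper's implementation; the function name, shapes, and random values are assumptions.

```python
import numpy as np

def sigmoid(z):
    # Squashes each entry into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forget_gate(w, b, h_prev, x_t):
    """Forget gate f_t = sigmoid(w @ [h_{t-1}; x_t] + b).

    w      : weight matrix, shape (hidden, hidden + input)
    b      : bias vector, shape (hidden,)
    h_prev : previous output h_{t-1}, shape (hidden,)
    x_t    : current input, shape (input,)
    """
    concat = np.concatenate([h_prev, x_t])  # [h_{t-1}; x_t]
    return sigmoid(w @ concat + b)

rng = np.random.default_rng(0)
hidden, inp = 4, 3
w = rng.standard_normal((hidden, hidden + inp))
b = np.zeros(hidden)
f_t = forget_gate(w, b, np.zeros(hidden), rng.standard_normal(inp))
# Each gate value lies strictly in (0, 1): near 1 keeps the
# corresponding cell-state entry, near 0 forgets it.
```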
“…All experiments are trained on the same dataset as [16], which is a combined dataset of the CREL Research Database (SVDB) [18], the NUS sung and spoken lyrics corpus [19], the Byzantine singing from the i-Treasures Intangible Cultural Heritage dataset [20], the PJS phoneme-balanced Japanese singing-voice corpus [21], JVS-MuSiC [22], the Tohoku Kiritan and Itako singing database [23], VocalSet: A singing voice dataset [24], as well as singing recordings from our internal singing databases used for the IRCAM Singing Synthesizer [25] and other projects.…”
Section: Data
confidence: 99%