Interspeech 2019
DOI: 10.21437/interspeech.2019-1122
The INTERSPEECH 2019 Computational Paralinguistics Challenge: Styrian Dialects, Continuous Sleepiness, Baby Sounds & Orca Activity

Abstract: The INTERSPEECH 2019 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the Styrian Dialects Sub-Challenge, three types of Austrian-German dialects have to be classified; in the Continuous Sleepiness Sub-Challenge, the sleepiness of a speaker has to be assessed as a regression problem; in the Baby Sound Sub-Challenge, five types of infant sounds have to be classified; and in the Orca Activity Sub-Challenge, orca…

Cited by 74 publications (119 citation statements)
References 18 publications
“…The most comprehensive standard feature set for openSMILE so far is the ComParE set. It is widely known, as it represented the official baseline feature set of the 2013-2019 Computational Paralinguistics ChallengEs (e.g., Schuller et al. 2013, 2018, 2019), carried out in connection with the Annual Conferences of the International Speech Communication Association (INTERSPEECH conferences). The ComParE set comprises 6373 acoustic supra-segmental features, so-called higher-level descriptors (HLDs).…”
Section: Feature-based Representation
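To make this concrete, here is a minimal sketch of extracting the ComParE functionals with the opensmile Python package; the file name is a placeholder and the package is assumed to be installed (it is not part of the cited works):

# Minimal sketch: extracting the ComParE acoustic feature set with the
# opensmile Python package (pip install opensmile). "example.wav" is a
# placeholder; ComParE_2016 is the 6373-dimensional functionals set.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)

# Returns a pandas DataFrame with one row (the whole file) and
# 6373 feature columns.
features = smile.process_file("example.wav")
print(features.shape)  # expected: (1, 6373)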
“…It allows for learning data representations from audio time series by means of a recurrent sequence-to-sequence autoencoder approach (Freitag et al. 2017). Complementing the brute-force feature extraction tools openSMILE and openXBOW, in 2018 AUDEEP was selected as the open-source representation learning toolkit for official baseline evaluation within the ongoing series of INTERSPEECH ComParE Challenges (Schuller et al. 2018, 2019). Thus, in that year, AUDEEP was first applied to learn representations from preverbal data.…”
Section: Representation Learning
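AUDEEP itself is driven through its own command-line workflow; the snippet below is only an illustrative PyTorch sketch of the underlying idea, a recurrent sequence-to-sequence autoencoder whose encoder state serves as the learnt representation. All class and variable names are hypothetical and this is not AUDEEP's actual implementation:

# Illustrative sketch (not auDeep itself): a recurrent sequence-to-sequence
# autoencoder over spectrogram frames. The encoder's final hidden state is
# taken as the learnt utterance representation.
import torch
import torch.nn as nn


class Seq2SeqAutoencoder(nn.Module):
    def __init__(self, n_mels=128, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
        self.decoder = nn.GRU(n_mels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, x):                       # x: (batch, time, n_mels)
        _, h = self.encoder(x)                  # h: (1, batch, hidden)
        # Decode with the input shifted by one frame (teacher forcing),
        # conditioned on the encoder's final state.
        dec_in = torch.zeros_like(x)
        dec_in[:, 1:, :] = x[:, :-1, :]
        dec_out, _ = self.decoder(dec_in, h)
        return self.out(dec_out), h.squeeze(0)  # reconstruction, representation


# Hypothetical usage: minimise reconstruction error, then keep the
# hidden-state representation as features for a downstream classifier.
model = Seq2SeqAutoencoder()
spectrograms = torch.randn(8, 100, 128)         # dummy batch of mel spectrograms
recon, representation = model(spectrograms)
loss = nn.functional.mse_loss(recon, spectrograms)
loss.backward()

In a full pipeline, the learnt representations would then be fed to a classifier or regressor, analogously to the hand-crafted feature sets above.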
“…More detailed statistics about the dataset and the label distribution are provided in Table 1 and Figure 1, respectively. The task of the challenge is to build a regression model that is able to predict the KSS rating for an audio recording [11].…”
Section: Sleep Corpus
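As a minimal sketch of such a regression setup, the following uses placeholder data in place of real acoustic features and KSS labels (1-9 scale), fits a linear support vector regressor, and scores with Spearman's rank correlation between predicted and gold ratings; it is an illustration, not the challenge baseline code:

# Minimal sketch (placeholder data): predicting KSS sleepiness ratings from
# pre-extracted acoustic features with a linear support vector regressor.
import numpy as np
from scipy.stats import spearmanr
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6373))     # stand-in for ComParE-style features
y_train = rng.uniform(1, 9, size=200)      # stand-in for KSS ratings (1-9)
X_dev = rng.normal(size=(50, 6373))
y_dev = rng.uniform(1, 9, size=50)

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1e-3))
model.fit(X_train, y_train)
pred = model.predict(X_dev)

# Rank correlation between predicted and true sleepiness ratings.
rho, _ = spearmanr(y_dev, pred)
print(f"Spearman rho: {rho:.3f}")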
“…In the Orca Activity Challenge [1], underwater audio has to be classified as Orca or Non-Orca sound. Orcas (killer whales) are marine mammals that live in groups in every ocean in the world.…”
Section: Introduction