Interspeech 2016
DOI: 10.21437/interspeech.2016-766

Automatic Analysis of Typical and Atypical Encoding of Spontaneous Emotion in the Voice of Children

Abstract: Children with Autism Spectrum Disorders (ASD) have significant difficulties understanding and expressing emotions. Systems have thus been proposed to provide objective measurements of the acoustic features used by children with ASD to encode emotion in speech. However, only a few studies have exploited such systems to compare different groups of children in their ability to express emotions, and even fewer have focused on the analysis of spontaneous emotion. In this contribution, we provide insights by ex…

Cited by 27 publications (15 citation statements)
References 25 publications
“…Given the strong performance of COMPARE in similar tasks [17,33], our results are weaker than expected. The COMPARE feature set can be considered an omnibus feature set for paralinguistic tasks [34], and has been used successfully in the past for similar tasks of automatic diagnosis for ASC child vocalisations [17], as well as more recently for classifying typically developing children and children on the autism spectrum [33]. As our corpus size is relatively small (803 instances), and the dimensionality of COMPARE is large (6373 features), we speculate that the use of COMPARE introduced undesirable noise, which may have negatively impacted the result.…”
Section: Results (contrasting)
confidence: 95%
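The excerpt above attributes the weaker result to the mismatch between the corpus size (803 instances) and the dimensionality of COMPARE (6373 functionals). As a purely illustrative, hedged sketch (not the cited paper's pipeline, and using synthetic placeholder data rather than the authors' corpus), one common way to limit the noise introduced by such a high-dimensional set is to place feature selection inside a cross-validated pipeline, so that the reduction is fitted only on training folds:

```python
# Hedged sketch with synthetic placeholder data (NOT the corpus discussed above):
# handling a 6373-dimensional feature set on ~800 instances by selecting a subset
# of class-discriminative functionals inside the cross-validation pipeline.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(803, 6373))   # placeholder for ComParE functionals
y = rng.integers(0, 2, size=803)   # placeholder binary labels

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=500),      # keep the 500 highest-scoring features
    LinearSVC(C=0.01, max_iter=10000),  # strongly regularised linear classifier
)
print(cross_val_score(clf, X, y, cv=5).mean())
```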
“…We investigate the suitability of three Interspeech Computational Paralinguistics Challenge feature sets from 2009 (IS09-Emotion) [11], 2010 (IS10-Paraling) [12], and 2013 (COMPARE) [13]. These representations, COMPARE in particular, have been found suitable for similar classification tasks between the speech of typically and atypically developing children [13][14][15][16], and for recognising spontaneous emotional expressions in the vocalisations of ASC children [17].…”
Section: Introduction (mentioning)
confidence: 99%
“…For transparency and reproducibility, we exploited the openSMILE feature extraction toolkit [10] to extract two audio feature sets widely used in the field of computational paralinguistics: the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS) and the large-scale Interspeech 2013 Computational Paralinguistics Challenge feature set (ComParE) [10]. Both sets have been successfully utilised in the field of affective computing [8], and recently investigated for the automatic diagnosis of ASC in children's voices [25,28].…”
Section: Acoustic (mentioning)
confidence: 99%
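For readers unfamiliar with these sets, the following is a minimal, hedged sketch of extracting both kinds of utterance-level functionals with the opensmile Python wrapper around the openSMILE toolkit; the audio path is a placeholder, and the wrapper's ComParE 2016 configuration is assumed here as the closest available stand-in for the 2013 Challenge set named in the excerpt:

```python
# Hedged sketch, not the cited paper's exact configuration.
# "child_utterance.wav" is a placeholder path.
import opensmile

egemaps = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,       # 88 functionals per utterance
    feature_level=opensmile.FeatureLevel.Functionals,
)
compare = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,     # 6373 functionals per utterance
    feature_level=opensmile.FeatureLevel.Functionals,
)

x_egemaps = egemaps.process_file("child_utterance.wav")  # pandas DataFrame, one row
x_compare = compare.process_file("child_utterance.wav")
print(x_egemaps.shape, x_compare.shape)                   # (1, 88) (1, 6373)
```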
“…Since abnormal prosody has also been reported as a core marker of ASC [12], paralinguistic cues appear, on the other hand, better suited for automatic detection. Suprasegmental acoustic features relating to articulation, loudness, pitch, and rhythm have indeed shown promising results for children's speech [4,19,21,25]. These acoustic features have also been successfully used in speech-based interaction systems for improving social skills of children suffering from ASC [18,20].…”
Section: Introduction (mentioning)
confidence: 99%
“…to be useful as they can be computed with close to real-time capabilities and even provide better performance (Ringeval et al., 2016). In this study, we combined two expert-knowledge feature sets that have shown robustness in emotional speech recognition: eGeMAPS and MFCCs.…”
Section: Smaller Specific Sets Of Low Level Descriptors (LLDs) Have R… (mentioning)
confidence: 99%
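The excerpt does not specify how the two representations were fused, so the following is only a hedged illustration of the simplest option (concatenating utterance-level eGeMAPS functionals with MFCC statistics); the audio path, sampling rate, and number of coefficients are assumptions, not details from the cited study:

```python
# Hedged sketch of one possible eGeMAPS + MFCC fusion (simple concatenation);
# not the cited study's method. "child_utterance.wav" is a placeholder path.
import librosa
import numpy as np
import opensmile

path = "child_utterance.wav"

# eGeMAPS functionals (88 values per utterance) via the opensmile wrapper
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
egemaps_vec = smile.process_file(path).to_numpy().ravel()

# MFCC low-level descriptors summarised to utterance level (mean and std per coefficient)
signal, sr = librosa.load(path, sr=16000)                # assumed 16 kHz sampling rate
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
mfcc_stats = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Combined utterance-level vector: 88 + 26 = 114 dimensions
features = np.concatenate([egemaps_vec, mfcc_stats])
print(features.shape)
```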