2005
DOI: 10.1016/j.specom.2004.09.010

Recognition of affective prosody by speakers of English as a first or foreign language

Cited by 20 publications (22 citation statements)
References 32 publications
“…The results of a study conducted by Dromey, Silveira, and Sandor (2005) suggest that being a native speaker of a language does not guarantee higher emotion recognition scores in that language. In this study, 32 polyglot L1 English speakers performed marginally better than 57 LX English users, but 53 monoglot L1 English speakers did not outperform the LX users in detecting affective prosody at the single-word level.…”
Section: Predominance of a Particular Channel? (mentioning)
confidence: 98%
“…In the present work, we classify perception studies of affective speech into two categories: one focuses on how accurately affective states in speech are recognized (Scherer et al., 2001; Dromey et al., 2005; Thompson et al., 2004); the other perceptually evaluates prosodic features, such as voice quality and prosodic contour, in their ability to convey affective states (Gobl and Ní Chasaide, 2003; Yanushevskaya et al., 2006; Bänziger and Scherer, 2005; Morel and Bänziger, 2004; Chen, 2005; Rodero, 2011). Here, we are principally interested in the latter.…”
Section: Introduction (mentioning)
confidence: 99%
“…Nevertheless, in many previous studies [3–7, 10–12], the most popular acoustic features for spoken emotion recognition have been prosody features, voice quality features, and spectral features. As in our previous work [31, 53], we extracted these three typical sets of acoustic features from each emotional utterance.…”
Section: Acoustic Feature Extraction (mentioning)
confidence: 99%
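As a concrete illustration of the three feature families the statement above names, here is a minimal per-utterance extraction sketch in Python. It assumes the librosa library; the function name, feature choices, and summary statistics are illustrative assumptions rather than the cited paper's actual pipeline, and voice quality measures (formants, HNR) are only noted in a comment since they typically come from a phonetics tool such as Praat/parselmouth.

```python
# A minimal sketch (assumed names, not the cited pipeline): extract
# prosody and spectral features per utterance with librosa. Voice
# quality features (formants F1-F3, HNR) would usually come from
# Praat/parselmouth and are omitted here for brevity.
import numpy as np
import librosa

def extract_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)

    # Prosody: fundamental frequency (pYIN) and intensity (RMS energy).
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]  # keep voiced frames only

    rms = librosa.feature.rms(y=y)[0]

    # Spectral: MFCCs, summarized by per-coefficient mean and std.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

    # One fixed-length feature vector per utterance.
    return np.hstack([f0.mean(), f0.std(), rms.mean(), rms.std(),
                      mfcc.mean(axis=1), mfcc.std(axis=1)])
```

Summarizing frame-level trajectories by utterance-level statistics (mean, std) is the common way to obtain the fixed-length vectors that conventional classifiers expect.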
“…As far as feature extraction is concerned, some studies aim at finding the acoustic features most useful for expressing human emotion in speech. The representative acoustic features used for spoken emotion recognition comprise prosody features [3–9], such as pitch, intensity, and duration; voice quality features [10–12], such as the first three formants (F1, F2, F3), spectral energy distribution, and the harmonics-to-noise ratio (HNR); and spectral features [13–17], such as linear prediction coefficients (LPC), linear prediction cepstral coefficients (LPCC), and mel-frequency cepstral coefficients (MFCC). On the classification side, some studies focus on machine learning algorithms for constructing a good classifier that decides the underlying emotion category of a speech utterance.…”
Section: Introduction (mentioning)
confidence: 99%
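The classification step this last excerpt describes is conventionally a standard supervised learner over fixed-length feature vectors. Below is a hedged sketch using scikit-learn; the synthetic X and y stand in for a real feature matrix (e.g. from the extract_features sketch above) and emotion labels, and the RBF-kernel SVM is one common choice, not the cited papers' specific method.

```python
# A hedged illustration, not the cited papers' method: train a standard
# classifier (SVM, RBF kernel) on fixed-length acoustic feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins so the sketch runs end to end; in practice X would
# hold per-utterance acoustic features and y the annotated emotions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.choice(["anger", "joy", "sadness", "neutral"], size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```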