2018
DOI: 10.1016/j.neucom.2017.09.049

Efficient and effective strategies for cross-corpus acoustic emotion recognition

Cited by 74 publications (36 citation statements)
References 18 publications
“…Interestingly, approaches in the AVEC 2018 CES did not employ approaches such as transfer learning [80,81] or domain adaptation techniques [29,54] typically seen in cross-cultural testing. In [76], the authors proposed a model based on emotional salient detection to identify emotion markers invariant to sociocultural context.…”
Section: Cross-cultural Emotion Recognition
confidence: 99%
“…This is a popular metric in the area of ASR, which considers unweighted rather than weighted average recall. The reason is that it is unaffected by a change in class frequency [26,27]. As seen in Table 1, it is calculated as the average of the Recall of Class 0 and the Recall of Class 1.…”
Section: Metrics For Performance Evaluation
confidence: 99%
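The quoted passage defines UAR as the plain mean of the per-class recalls. A minimal Python sketch, using illustrative labels rather than data from any of the cited studies, makes the class-frequency insensitivity concrete; scikit-learn's recall_score with average='macro' yields the same quantity.

```python
# Minimal sketch: unweighted average recall (UAR) as the plain mean of
# per-class recalls, independent of how often each class occurs.
import numpy as np
from sklearn.metrics import recall_score

def uar(y_true, y_pred):
    """Mean of per-class recalls (macro-averaged recall)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

# Illustrative labels for a heavily imbalanced binary task.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [0] * 5 + [1] * 5   # majority class recalled perfectly, minority only half

print(uar(y_true, y_pred))                            # (1.0 + 0.5) / 2 = 0.75
print(recall_score(y_true, y_pred, average="macro"))  # same value via scikit-learn
```

Overall accuracy on the same labels would be 0.95, which is exactly the mismatch the UAR metric is meant to expose.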
“…The data-balancing techniques adopted are based on downsampling [20], SMOTE [23] and ADASYN [24]. The existence or not of the “Accuracy Paradox” phenomenon is also investigated, along with which performance metrics are better suited to assess classifiers, such as G-mean [25] and UAR [26,27]. For study validation, this work analyzes educational data of students from the Integrated Courses (high school with training in professional education through technical courses), updated in January 2018, for the Federal Institute of Rio Grande do Norte (IFRN), Brazil.…”
Section: Introduction
confidence: 99%
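The balancing-plus-metrics workflow described in that passage can be sketched as follows. This is a hypothetical example on synthetic data (not the IFRN educational dataset), assuming scikit-learn and the imbalanced-learn package for SMOTE, ADASYN, random undersampling and the G-mean score.

```python
# Sketch: rebalance an imbalanced training set and compare plain accuracy
# against class-frequency-insensitive metrics such as UAR and G-mean.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE, ADASYN
from imblearn.under_sampling import RandomUnderSampler
from imblearn.metrics import geometric_mean_score

# Synthetic 90/10 imbalanced binary problem.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

samplers = {
    "none": None,
    "undersample": RandomUnderSampler(random_state=0),
    "SMOTE": SMOTE(random_state=0),
    "ADASYN": ADASYN(random_state=0),
}

for name, sampler in samplers.items():
    X_bal, y_bal = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    y_hat = LogisticRegression(max_iter=1000).fit(X_bal, y_bal).predict(X_te)
    print(
        f"{name:12s} acc={accuracy_score(y_te, y_hat):.3f} "
        f"UAR={recall_score(y_te, y_hat, average='macro'):.3f} "
        f"G-mean={geometric_mean_score(y_te, y_hat):.3f}"
    )
```

When the unbalanced run shows high accuracy but low UAR and G-mean, that is the "Accuracy Paradox" the citing authors refer to.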
“…In addition, a facial recognition system has been applied to evaluate the quality of distance teaching [18]. In speech analysis, emotion recognition is implemented by using the extreme learning machine (ELM) [19]. Music, in which emotions are expressed, can be analyzed to tell the difference between contemporary commercial music and classical singing techniques [20].…”
Section: Related Work
confidence: 99%
“…By using a fast Fourier transformation, the frequency features (60 power features, 16 power difference features) were prepared. In each channel, the power features were computed on four frequency bands, i.e., theta (4-8 Hz), alpha (8-12 Hz), beta (12-30 Hz) and gamma (30-45 Hz). Power difference features were employed to detect the variation in cerebral activity between the left and right cortical areas.…”
Section: Feature Extraction and the Target Emotion Classes
confidence: 99%
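As a rough illustration of the band-power features described in that passage, the sketch below computes per-band power for a single synthetic EEG channel with the FFT, using the quoted band edges; the sampling rate, window length and signal are assumptions for the example only, not values from the cited paper.

```python
# Sketch: per-band power features from one EEG channel via the FFT,
# using the theta/alpha/beta/gamma band edges quoted above.
import numpy as np

FS = 128                      # assumed sampling rate in Hz (illustrative)
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30), "gamma": (30, 45)}

def band_powers(signal, fs=FS, bands=BANDS):
    """Return average spectral power per frequency band for a 1-D signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {
        name: float(power[(freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in bands.items()
    }

# Synthetic 4-second channel: a 10 Hz (alpha) tone buried in noise.
t = np.arange(0, 4, 1.0 / FS)
channel = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(band_powers(channel))   # alpha power should dominate

# A left-right power difference feature for a channel pair could then be
# formed as band_powers(left)[band] - band_powers(right)[band].
```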