2020 Chinese Control and Decision Conference (CCDC)
DOI: 10.1109/ccdc49329.2020.9164823

Speech Emotion Recognition of Teachers in Classroom Teaching

Cited by 13 publications (4 citation statements). References 4 publications.
“…Academic emotion evaluation in online learning can boost learner wellbeing [67]. Emotion recognition can be used to provide personalized support as feedback to learners [54]-[57], [63], [64]; to make training more efficient [58]; to improve learning outcomes and instruction quality [59]; to identify learner comprehension [60]; to enhance teaching quality [65]; to extract information from comments in online course learning [66]; to provide adaptable e-learning [68]; to present adaptive course content [69]; and to support teacher training [70]. The FER2013, CK+ and JAFFE datasets consist of six basic emotions (Angry, Disgust, Fear, Happy, Sad, Surprise) plus Neutral.…”
Section: The Functions Of Prol (mentioning)
confidence: 99%
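As a minimal sketch of how the seven categories named in the statement above (six basic emotions plus Neutral) are typically encoded before training a classifier, the mapping below is illustrative only; the index order is an assumption and is not taken from the cited datasets' documentation.

# Hypothetical label mapping for the seven classes shared by FER2013, CK+ and JAFFE:
# the six basic emotions plus Neutral. Index order is assumed for illustration.
EMOTION_LABELS = {
    0: "Angry",
    1: "Disgust",
    2: "Fear",
    3: "Happy",
    4: "Sad",
    5: "Surprise",
    6: "Neutral",
}

# Inverse mapping, useful when converting string annotations to integer targets.
LABEL_TO_INDEX = {name: idx for idx, name in EMOTION_LABELS.items()}

print(LABEL_TO_INDEX["Happy"])  # -> 3 under this illustrative ordering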
“…The frequency (hertz) axis and the amplitude (colour) dimension are converted to a log scale and a decibel (dB) scale, respectively, to form the spectrogram over time. The mel spectrogram can visualise speech signals on the mel‐scale and generate various patterns [11, 14, 28] to train a machine learning (ML) or deep learning (DL)‐based classifier [29–31]. However, this method incorporates more computational complexity than the YAAPT algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
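To make the mel spectrogram step described above concrete, here is a minimal sketch using librosa (an assumed dependency; the file name, sample rate and FFT parameters are illustrative and not taken from the cited work).

import librosa
import numpy as np

# Illustrative parameters; the cited work does not specify these values.
AUDIO_PATH = "teacher_utterance.wav"  # hypothetical input file
SAMPLE_RATE = 16000

# Load the speech signal at a fixed sample rate.
y, sr = librosa.load(AUDIO_PATH, sr=SAMPLE_RATE)

# Mel spectrogram: the frequency axis is mapped onto the mel scale.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=64)

# Amplitude (the colour dimension when plotted) converted to decibels.
mel_db = librosa.power_to_db(mel, ref=np.max)

# mel_db is an (n_mels, frames) array that can be fed to an ML/DL classifier,
# e.g. treated as a single-channel image.
print(mel_db.shape)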
“…Experimental results show that the improved model's mAP is increased by 4%, the F1 score is increased by 3.2%, and the detection time is reduced by 1/3. Liang et al [23] focused on teachers' voice signals and designed an audio processing system for emotion detection. Teachers' speeches are used to determine their emotions.…”
Section: Introduction (mentioning)
confidence: 99%