2017 International Conference on Big Data Analytics and Computational Intelligence (ICBDAC)
DOI: 10.1109/icbdaci.2017.8070805
Emotion recognition on speech signals using machine learning

Cited by 25 publications (7 citation statements)
References 13 publications
“…In kNN, anger reached 86.8%, fear 93.7%, joy 83.6%, neutral 95.9%, and sadness 96.3%. M. Ghai [8] sampled the sound signals at 16000 Hz and used a frame duration of 0.25 seconds for feature extraction. A. Iqbal [13] extracted 34 audio features from two datasets (RAVDESS and SAVEE), with a frame size of 0.05 s and a step size of 0.025 s.…”
Section: Related Work (mentioning)
confidence: 99%
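To make the two framing schemes in the statement above concrete, the sketch below slices a 16 kHz signal into non-overlapping 0.25 s frames (as in Ghai [8]) and into 0.05 s frames with a 0.025 s step (as in Iqbal [13]). The file name "speech.wav" is a placeholder, and librosa/NumPy are assumed tooling; neither cited work names its libraries.

import numpy as np
import librosa  # assumed; used only to load the waveform at 16 kHz

# Load at the 16000 Hz rate used in [8]; "speech.wav" is a placeholder path.
y, sr = librosa.load("speech.wav", sr=16000)

def frame_signal(y, frame_sec, step_sec, sr):
    """Slice a 1-D signal into frames of frame_sec seconds taken every step_sec seconds."""
    frame_len = int(frame_sec * sr)
    step_len = int(step_sec * sr)
    n_frames = 1 + (len(y) - frame_len) // step_len
    return np.stack([y[i * step_len : i * step_len + frame_len]
                     for i in range(n_frames)])

# [8]: non-overlapping 0.25 s frames -> 4000 samples per frame at 16 kHz.
frames_ghai = frame_signal(y, frame_sec=0.25, step_sec=0.25, sr=sr)

# [13]: 0.05 s frames with a 0.025 s step -> 800-sample frames, 50% overlap.
frames_iqbal = frame_signal(y, frame_sec=0.05, step_sec=0.025, sr=sr)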
“…This model's weighted accuracy was 68.1% and its unweighted accuracy 67%. Different frame sizes of 10-20 ms [6,8], etc., were selected in different works. Entropy, spectral entropy, MFCC, ZCR (zero-crossing rate), pitch, energy, etc., were the common features for audio data.…”
Section: Related Work (mentioning)
confidence: 99%
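The frame-level features named above can be sketched in a few lines. This assumes librosa (not named by the cited works) and a placeholder "speech.wav"; the 20 ms frame with a 10 ms hop sits in the 10-20 ms range cited in [6, 8], and spectral entropy is computed by hand since librosa has no built-in for it.

import numpy as np
import librosa  # assumed library choice

y, sr = librosa.load("speech.wav", sr=16000)  # placeholder path

n_fft = int(0.020 * sr)  # 20 ms frame -> 320 samples
hop = int(0.010 * sr)    # 10 ms hop  -> 160 samples

# Frame-wise MFCC, zero-crossing rate, and short-time energy.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=n_fft, hop_length=hop)
zcr = librosa.feature.zero_crossing_rate(y, frame_length=n_fft, hop_length=hop)
energy = librosa.feature.rms(y=y, frame_length=n_fft, hop_length=hop) ** 2

# Spectral entropy from the normalized power spectrum of each frame.
S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2
p = S / (S.sum(axis=0, keepdims=True) + 1e-10)
spectral_entropy = -(p * np.log2(p + 1e-10)).sum(axis=0)

# Pitch via YIN, left at librosa's default framing (f0 needs longer windows).
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)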
“…To measure user experience, Alves et al. [16] suggest that the main methods used are observation, think aloud, contextual inquiry, interviews, prototyping, task analysis, cognitive walkthrough and questionnaires. Conventional methods for collecting and subsequently assessing user opinions are post-interaction, such as retrospective verbal or written self-report questionnaires [17][18][19]. However, despite allowing useful analysis, these methods depend on the users' interpretation and memory, as well as the accuracy and quality of the answers [5].…”
Section: User Experience (mentioning)
confidence: 99%
“…Mohan Ghai [12], in 2017: the main objective of this paper is speech emotion recognition, categorizing speech into seven emotional states: anger, boredom, disgust, anxiety, happiness, sadness and neutral. The given method is based on Mel Frequency Cepstral Coefficients (MFCC) and uses the Berlin database of emotional speech.…”
Section: Audio Analysis And (mentioning)
confidence: 99%
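To illustrate the kind of MFCC-plus-classifier pipeline that [12] describes on the Berlin database, here is a minimal sketch under stated assumptions: clip-level MFCC mean/std statistics fed to a k-nearest-neighbour classifier from scikit-learn. The local path "emodb/wav" and the choice of kNN are illustrative assumptions, not the paper's reported setup.

import glob
import os
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# EMO-DB file names encode the emotion as one letter at index 5
# (German initials: Wut, Langeweile, Ekel, Angst, Freude, Trauer, Neutral).
EMOTION_CODE = {"W": "anger", "L": "boredom", "E": "disgust", "A": "anxiety",
                "F": "happiness", "T": "sadness", "N": "neutral"}

def mfcc_vector(path, sr=16000, n_mfcc=13):
    """Clip-level feature vector: mean and std of frame-wise MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

files = sorted(glob.glob("emodb/wav/*.wav"))  # hypothetical local corpus path
labels = [EMOTION_CODE[os.path.basename(f)[5]] for f in files]

X = np.stack([mfcc_vector(f) for f in files])
y_lab = np.array(labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y_lab, test_size=0.2, stratify=y_lab, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))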