2021
DOI: 10.11591/ijece.v11i6.pp5438-5449
Emotion recognition from syllabic units using k-nearest-neighbor classification and energy distribution

Abstract: In this article, we present an automatic technique for recognizing emotional states from speech signals. The main focus of this paper is to present an efficient and reduced set of acoustic features that allows us to recognize the four basic human emotions (anger, sadness, joy, and neutral). The proposed feature vector is composed of twenty-eight measurements corresponding to standard acoustic features such as formants and fundamental frequency (obtained with the Praat software), as well as newly introduced features base…
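The abstract describes classifying a 28-dimensional acoustic feature vector with k-nearest-neighbor voting. The paper's exact features and parameters are not reproduced here; the following is a minimal sketch of the general k-NN step, with random vectors standing in for the real acoustic measurements:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_X - x, axis=1)   # distance to every training vector
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    votes = Counter(train_y[i] for i in nearest)  # count their labels
    return votes.most_common(1)[0][0]

# Toy illustration: random 28-dimensional vectors standing in for the
# paper's acoustic measurements (formants, F0, energy statistics).
emotions = ["anger", "sadness", "joy", "neutral"]
rng = np.random.default_rng(0)
train_X = rng.normal(size=(40, 28))
train_y = [emotions[i % 4] for i in range(40)]

print(knn_predict(train_X, train_y, train_X[5], k=1))  # prints "sadness"
```

With k=1 a query identical to a training vector trivially recovers that vector's label; in practice k and the distance metric would be tuned on held-out data.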

Cited by 5 publications (5 citation statements). References 42 publications.
“…In that 206 are normal images and 113 are breast cancer images [22]. The proposed approach is compared with other classifiers like Naive Bayes (NB) [23], k-nearest neighbor (KNN) [24], decision tree (DT) [23], support vector machine (SVM) [24] and random forest (RF) [25].…”
Section: Results
Mentioning, confidence: 99%
“…A first work [46], published in 2021, highlighted this finding. Indeed, by working on plosive consonants extracted from the MADED natural database, and by proposing a set of features based essentially on the energy and its variations, quite satisfactory recognition rates were obtained using the K-nearest neighbors (K-NN) algorithm.…”
Section: Experimental Validation
Mentioning, confidence: 86%
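The citation above refers to features "based essentially on the energy and its variations" computed over speech segments. As a rough sketch of that idea (frame length and hop size are assumptions, not the cited paper's settings), one can compute a short-time energy contour and summarize it:

```python
import numpy as np

def frame_energies(signal, frame_len=256, hop=128):
    """Short-time energy: sum of squared samples per overlapping frame."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])

def energy_features(signal):
    """Summary statistics of the energy contour: mean level, spread,
    and mean absolute frame-to-frame variation (a crude 'delta energy')."""
    e = frame_energies(signal)
    delta = float(np.mean(np.abs(np.diff(e)))) if len(e) > 1 else 0.0
    return {"mean": float(np.mean(e)),
            "std": float(np.std(e)),
            "delta": delta}

# A constant-amplitude signal yields a flat energy contour with zero variation.
feats = energy_features(np.ones(512))
```

Statistics like these, concatenated with formant and F0 measurements, would form the kind of fixed-length vector that a k-NN classifier consumes.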
“…But there is a profound question regarding the selection of features and methods in human emotion classification, as the performance of the model is highly based upon these parameters. Emotions can be recognized through speech [1], [2], facial expressions [3] [6], textual data [7], [8], body gestures, audio-visual features [9], human interaction with devices and physiological signals such as electroencephalogram (EEG), electrocardiogram [10], electromyogram, electrodermal activity, skin temperature, and respiration. Over the years, EEG has gained a lot of attraction as a powerful technique for exploring the study of emotions due to its inexpensiveness and portable nature.…”
Section: Introduction
Mentioning, confidence: 99%