2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT)
DOI: 10.1109/icicict1.2017.8342835

Music player based on emotion recognition of voice signals

Cited by 20 publications (7 citation statements). References 9 publications. Citation types: 0 supporting, 6 mentioning, 0 contrasting.

“…The accuracy results of [9] and [16] for EMO-DB are 7.59% and 9.16% lower than the accuracy result of the 1BTPDN method. The accuracy result of [24] for EMO-DB is 1.16% and 10.46% lower than the accuracy result of the 1BTPDN method.…”
Section: Discussion (mentioning)
confidence: 63%
“…• Time-Based Features They are zero-crossing rate (ZCR) [8] and amplitude-based features, such as amplitude descriptor, log attack time, attack, decay, sustain, release envelope, short-time energy (STE) [9], shimmer [10], rhythm-based features [8], [11], volume, and temporal centroid [3].…”
Section: Acoustic Features in SER Literature (mentioning)
confidence: 99%
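
Since this excerpt names concrete time-domain features, here is a minimal NumPy sketch of two of them, zero-crossing rate and short-time energy, computed frame by frame. The 25 ms frame / 10 ms hop at 16 kHz are illustrative assumptions, not values from the cited papers.

```python
# Minimal sketch: per-frame ZCR and STE for a 1-D signal `y`.
# Frame length 400 and hop 160 assume 16 kHz audio (25 ms / 10 ms).
import numpy as np

def frames(y, frame_len=400, hop=160):
    """Slice the signal into overlapping frames (rows of the result)."""
    n = 1 + max(0, (len(y) - frame_len) // hop)
    return np.stack([y[i * hop : i * hop + frame_len] for i in range(n)])

def zero_crossing_rate(y, frame_len=400, hop=160):
    """Fraction of sample-to-sample sign changes within each frame."""
    f = frames(y, frame_len, hop)
    return (np.abs(np.diff(np.sign(f), axis=1)) > 0).mean(axis=1)

def short_time_energy(y, frame_len=400, hop=160):
    """Sum of squared samples within each frame."""
    f = frames(y, frame_len, hop)
    return (f ** 2).sum(axis=1)

# Usage: zcr = zero_crossing_rate(signal); ste = short_time_energy(signal)
```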
“…An emotion-specific multilevel dichotomous classification (EMDC) is employed to compare the performance with direct multiclass classification. Proposal [107] uses a speech emotion recognition (SER) system that captures human emotion using voice speech signals as input. Five emotions are recognized: anger, anxiety, boredom, happiness, and sadness.…”
Section: Types of Activity Monitoring and Methodologies (mentioning)
confidence: 99%
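
For readers unfamiliar with EMDC, the following is a minimal sketch of how an emotion-specific multilevel dichotomous scheme differs from direct multiclass classification, assuming a two-level split by arousal. The high/low-arousal grouping, the RBF-SVM choice, and the function names are illustrative assumptions, not the cited paper's exact hierarchy.

```python
# Hypothetical EMDC-style sketch: a first dichotomy routes utterances to an
# arousal group, then a per-group classifier resolves the specific emotion.
# `X` is an (n_samples, n_features) array, `y` the emotion labels.
import numpy as np
from sklearn.svm import SVC

HIGH_AROUSAL = {"anger", "anxiety", "happiness"}  # assumed grouping
LOW_AROUSAL = {"boredom", "sadness"}

def fit_emdc(X, y):
    """Level 1 separates arousal groups; level 2 resolves emotions within each."""
    y = np.asarray(y)
    arousal = np.array([lbl in HIGH_AROUSAL for lbl in y])
    top = SVC(kernel="rbf").fit(X, arousal)              # dichotomy 1
    hi = SVC(kernel="rbf").fit(X[arousal], y[arousal])   # within high arousal
    lo = SVC(kernel="rbf").fit(X[~arousal], y[~arousal]) # within low arousal
    return top, hi, lo

def predict_emdc(models, X):
    """Route each sample through the top dichotomy, then the group classifier."""
    top, hi, lo = models
    route = top.predict(X).astype(bool)
    out = np.empty(len(X), dtype=object)
    if route.any():
        out[route] = hi.predict(X[route])
    if (~route).any():
        out[~route] = lo.predict(X[~route])
    return out
```

A direct multiclass baseline would instead be a single `SVC().fit(X, y)`; EMDC trades that single decision for easier binary splits at each level.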
“…In contrast with existing state-of-the-art solutions, the architecture proposed in other strategies improves recognition effectiveness by 64%. In 2017, Lukose et al. [9], Griol et al. [10], and Mohammadi et al. [12] used MFCC and endpoint detection for feature extraction, with classifiers such as SVM, ANN, and Naive Bayes, for their SER modules. Finally, 76.31% of devices used the GM model, and overall accuracy improved by 1.57% using SVM models.…”
Section: EMO-DB Database (mentioning)
confidence: 99%
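
As a concrete illustration of the MFCC-plus-classifier pattern this excerpt attributes to Lukose et al. and others, here is a hedged sketch using librosa for MFCC extraction and a scikit-learn SVM. The 16 kHz sampling rate, the 13-coefficient mean-pooled representation, and the file/label names are assumptions for illustration, not the papers' exact settings.

```python
# Hypothetical MFCC + SVM pipeline sketch for speech emotion recognition.
import librosa
import numpy as np
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Mean MFCC vector over the utterance (a common compact representation)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# `train_files` / `train_labels` are assumed lists of wav paths and emotion labels:
# X = np.stack([mfcc_features(p) for p in train_files])
# clf = SVC(kernel="rbf").fit(X, train_labels)
# print(clf.predict(mfcc_features("test.wav").reshape(1, -1)))
```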
“…Each speech dataset is passed through the pre-processing stage to extract the necessary feature vector from the data. The training set of vectors is passed to the appropriate classifier, and the classifier then predicts the emotion to validate the model [9]. The identification of speech emotions is carried out in four significant steps to generate speech-based output: acquisition, processing, output generation, and the application of the extracted voice feature.…”
Section: Introduction (mentioning)
confidence: 99%
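
The staged flow described here (acquisition, pre-processing, feature extraction, classification) can be sketched as below. The silence-trimming pre-processing step, the mean-MFCC feature vector, and the function names are illustrative assumptions; the final step would feed the vectors to a trained classifier such as the SVM sketched earlier.

```python
# Minimal sketch of the SER flow the introduction describes, one function per stage.
import librosa

def acquire(path):
    """Stage 1: acquisition, load the raw waveform."""
    y, sr = librosa.load(path, sr=16000)
    return y, sr

def preprocess(y, top_db=25):
    """Stage 2: pre-processing, trim leading/trailing silence (assumed step)."""
    trimmed, _ = librosa.effects.trim(y, top_db=top_db)
    return trimmed

def extract_features(y, sr):
    """Stage 3: build the feature vector (mean MFCCs as an assumed choice)."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# Stage 4: a trained classifier maps the feature vector to a predicted emotion,
# e.g. clf.predict(extract_features(preprocess(y), sr).reshape(1, -1))
```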