Cited by 8 publications (4 citation statements)
References 7 publications
“…0.6: bad; <0.5: test not useful (Table 2 in [10]). Results are satisfactory unless the value is less than 0.5, in which case the classification technique is not useful.…”
Section: Results
confidence: 97%
“…LDA is a statistical method which "reduces the dimension of the features while maximizing the information preserved in the reduced feature space" [4]. LDA minimizes the within-class covariance and maximizes the between-class covariance in the given database of speech samples, thereby guaranteeing maximal separability [11], [6]. The general procedure for implementing LDA is shown in , but in this paper we used two different approaches for implementing LDA: class-dependent LDA and class-independent LDA.…”
Section: E. Discrete Cosine Transform
confidence: 99%
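The within-class/between-class scatter trade-off that this statement attributes to LDA can be illustrated with a tiny Fisher-criterion sketch. This is a minimal 1-D sketch, not the paper's method: the feature values and emotion class names below are invented for illustration, and real class-dependent vs. class-independent LDA operates on vector features with per-class or pooled scatter matrices.

```python
# Hypothetical 1-D Fisher criterion: the quantity LDA maximizes is the
# between-class scatter divided by the within-class scatter.
from statistics import mean, pvariance

def fisher_ratio(class_a, class_b):
    """Between-class over within-class scatter for two 1-D classes."""
    m_a, m_b = mean(class_a), mean(class_b)
    between = (m_a - m_b) ** 2                      # separation of class means
    within = pvariance(class_a) + pvariance(class_b)  # spread inside each class
    return between / within

# Toy "feature" values for two invented emotion classes.
angry = [2.0, 2.2, 1.8, 2.1]
neutral = [0.9, 1.1, 1.0, 0.8]
print(fisher_ratio(angry, neutral))  # large ratio → well-separated classes
```

A projection direction with a high Fisher ratio keeps samples of the same class tight while pushing class means apart, which is the "maximal separability" the quoted passage refers to.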
“…K-Nearest Neighbor is a non-parametric method that classifies a speech dataset based on the closest training samples in the feature space [2]. In the classification phase, an unlabeled vector is classified by assigning it the label that is most frequent among the k training samples nearest to that point, where k is a user-defined constant. Unclassified speech samples are fed into the system to extract speech coefficients, and the model file is used to classify the speech emotion [19].…”
Section: K-Nearest Neighbor (KNN)
confidence: 99%
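The majority-vote rule described in this statement fits in a few lines. This is a sketch only: the 2-D feature vectors and emotion labels below are invented stand-ins for extracted speech coefficients, not data from the cited work.

```python
# Minimal k-NN classifier: label a query by majority vote among the k
# training samples closest to it in feature space.
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; returns the majority
    label of the k nearest neighbours under Euclidean distance."""
    nearest = sorted(train, key=lambda sample: dist(sample[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy training set: hypothetical 2-D "speech coefficients" per emotion.
train = [((1.0, 1.1), "happy"), ((0.9, 1.0), "happy"),
         ((3.0, 3.2), "sad"), ((3.1, 2.9), "sad"), ((1.2, 0.8), "happy")]
print(knn_predict(train, (1.0, 0.9)))  # → "happy"
```

Because k-NN stores the training set rather than fitting parameters, it is non-parametric in exactly the sense the quoted passage uses; k trades off noise sensitivity (small k) against over-smoothing (large k).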
“…The first stage of feature extraction and selection can be performed by handpicking features or by using neural networks. Previous studies have used prosodic, spectral, Mel-frequency cepstral coefficient (MFCC), linear prediction cepstral coefficient (LPCC) and formant features (Gudmalwar et al., 2019; Kuchibhotla et al., 2014; Lalitha et al., 2019; Pawar & Kokate, 2021). Training neural networks to extract appropriate features, rather than handpicking them, will solve most of the issues related to spatial-temporal features.…”
Section: Introduction
confidence: 99%