2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
DOI: 10.1109/apsipa.2016.7820708
Speech emotion classification using multiple kernel Gaussian process

Cited by 6 publications (6 citation statements) | References 12 publications
“…Similarly, from Table 3 for the IEMOCAP database, the highest accuracy is achieved with the feature fusion of the optimized MFCC, LPCC, and TEO-AutoCorr features, with 53 features obtaining 83.2% accuracy using SVM and 78% accuracy using the […]. Furthermore, the performance of the proposed SER system is compared with different works in Table 4 for EMO-DB and Table 5 for IEMOCAP in terms of the number of optimized features and classification accuracy. In [46], semi-NMF with k-means clustering initialization was used to transform feature sets, which were further combined with the original dataset to obtain a total of 72 features for SER, achieving 77.74% accuracy. In [47]–[52], different optimization and feature selection techniques, namely enhanced kernel isometric mapping, the modified supervised locally linear embedding algorithm, sparse partial least squares regression, sequential floating forward selection, the scaled conjugate gradient, and principal component analysis, were used to improve the classification accuracy by reducing the feature set dimension.…”
Section: Results (mentioning, confidence: 99%)
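
As an illustration of the feature-fusion-plus-SVM pipeline described in this statement, the following is a minimal sketch assuming pre-computed per-utterance features. The fuse_features helper and the synthetic MFCC/LPCC/TEO-AutoCorr arrays are hypothetical stand-ins, not the cited authors' code, and scikit-learn's SVC stands in for the SVM classifier used in the cited work.

# Hedged sketch of a feature-fusion + SVM pipeline for speech emotion
# recognition. The three feature arrays are placeholders for optimized
# MFCC / LPCC / TEO-AutoCorr extractors; a real system would compute
# them from the waveform (e.g. with librosa or custom DSP code).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_features(mfcc_feats, lpcc_feats, teo_feats):
    """Concatenate per-utterance feature vectors into one fused vector."""
    return np.hstack([mfcc_feats, lpcc_feats, teo_feats])

# Toy data standing in for per-utterance features (n_utterances x dim).
rng = np.random.default_rng(0)
n = 200
mfcc = rng.normal(size=(n, 26))   # e.g. MFCC means + deltas
lpcc = rng.normal(size=(n, 12))   # e.g. LPCC coefficients
teo = rng.normal(size=(n, 15))    # e.g. TEO-AutoCorr statistics
X = fuse_features(mfcc, lpcc, teo)
y = rng.integers(0, 4, size=n)    # four emotion classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))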
“…It can be seen from Figure 4 […] (2) Verification of APFOA-MKGPR: four benchmark functions are used to verify the prediction accuracy of the proposed APFOA-MKGPR model, which is compared with four other models: GPR, BMA-MKGPR in Yu et al. (2013b), LG-MKGPR in Chen et al. (2017), and MKGPR in Archambeau and Bach (2010). 1000 samples are randomly generated and used as the training data, and another 100 randomly generated samples are used as the testing data.…”
Section: Experiments and Discussion (mentioning, confidence: 99%)
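
The train/test protocol this statement describes (random training samples, a held-out test set, and a comparison of prediction accuracy) can be sketched as follows. This is a minimal illustration assuming scikit-learn's GaussianProcessRegressor with a plain RBF kernel; the benchmark_fn surface is a made-up example, not one of the four benchmark functions, and the APFOA parameter search and multi-kernel variants are not reproduced.

# Minimal sketch of the benchmark-function evaluation protocol: sample
# training/testing points at random, fit a GPR, and report test RMSE.
# A plain RBF GPR stands in for APFOA-MKGPR and the compared models.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def benchmark_fn(x):
    """One simple benchmark surface (assumed for illustration)."""
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(2 * x[:, 1])

rng = np.random.default_rng(42)
X_train = rng.uniform(-2, 2, size=(1000, 2))
X_test = rng.uniform(-2, 2, size=(100, 2))
y_train = benchmark_fn(X_train)
y_test = benchmark_fn(X_test)

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-6)
gpr.fit(X_train, y_train)
rmse = np.sqrt(np.mean((gpr.predict(X_test) - y_test) ** 2))
print("test RMSE:", rmse)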
“…Tian et al. (2016) adopted a mixed kernel function with a combination of the squared exponential kernel and the rational quadratic kernel to construct the multi-kernel function in GPR. Chen et al. (2017) used a combination of the linear kernel and the radial basis function kernel to form a mixed kernel function of GPR. Ding et al. (2017) presented a new multi-resolution kernel approximation algorithm for GPR.…”
Section: Introduction (mentioning, confidence: 99%)
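
The kernel combinations named in this statement can be expressed directly with scikit-learn's kernel algebra. The sketch below only illustrates how a squared-exponential-plus-rational-quadratic kernel and a linear-plus-RBF kernel are composed; the cited works' hyperparameter settings and weighting schemes are not reproduced.

# Sketch of the two mixed kernels mentioned above, built with
# scikit-learn's kernel algebra (sum of weighted component kernels).
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (
    RBF, RationalQuadratic, DotProduct, ConstantKernel as C
)

# Squared exponential + rational quadratic (as in Tian et al., 2016).
kernel_tian = C(1.0) * RBF(length_scale=1.0) + C(1.0) * RationalQuadratic()

# Linear + RBF (as in Chen et al., 2017); DotProduct is a linear kernel.
kernel_chen = C(1.0) * DotProduct() + C(1.0) * RBF(length_scale=1.0)

print(kernel_tian)
print(kernel_chen)

gpr = GaussianProcessRegressor(kernel=kernel_chen, alpha=1e-6)
# gpr.fit(X, y)  # kernel hyperparameters are optimized during fitting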
“…Some researchers have also suggested different classifiers: in the brain emotional learning (BEL) model (Mustaqeem et al. (2020)), a multilayer perceptron (MLP) and an adaptive neuro-fuzzy inference system are combined for SER. The multikernel Gaussian process (GP) (Chen et al. (2016b)) is another proposed classification strategy; it enables learning in the algorithm by combining two kernel functions, the radial basis function (RBF) kernel and the linear kernel.…”
(mentioning, confidence: 99%)
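
As a rough illustration of a multi-kernel GP classifier of this kind, the sketch below sums an RBF kernel with a linear (DotProduct) kernel inside scikit-learn's GaussianProcessClassifier. The feature matrix and emotion labels are synthetic placeholders, and the code stands in for, rather than reproduces, the multiple kernel Gaussian process of Chen et al. (2016b).

# Hedged sketch of a multi-kernel GP classifier for speech emotion
# recognition: an RBF kernel summed with a linear (DotProduct) kernel,
# applied to pre-extracted acoustic feature vectors.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, DotProduct, ConstantKernel as C

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 30))      # placeholder acoustic feature vectors
y = rng.integers(0, 4, size=120)    # placeholder labels for four emotions

kernel = C(1.0) * RBF(length_scale=1.0) + C(1.0) * DotProduct()
gpc = GaussianProcessClassifier(kernel=kernel)
gpc.fit(X[:90], y[:90])
print("held-out accuracy:", gpc.score(X[90:], y[90:]))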