2019
DOI: 10.1007/s10115-019-01341-6
Relevant feature selection and ensemble classifier design using bi-objective genetic algorithm

Cited by 16 publications (9 citation statements)
References 38 publications
“…The comparison is done based on the number of features selected and the accuracy of the classifiers used. The feature selection algorithms used are (i) Rough-spanning tree based feature selection algorithm (RMST) [43], (ii) Classification of vocal and non-vocal segments in audio clips using genetic algorithm based feature selection (GAFS) [55], (iii) Relevant feature selection and ensemble classifier design using bi-objective genetic algorithm (RFSA) [56], (iv) Acoustic feature selection for automatic emotion recognition from speech (AFSS) [57], (v) Exploring boundary region of rough set theory for feature selection (RSFS) [52], and (vi) Speech-Based Emotion Recognition: Feature Selection by Self-Adaptive Multi-Criteria Genetic Algorithm (SFGA) [58]. To measure the accuracy of the classifiers based on the reduced feature set, we have considered eight different classifiers, namely Support vector machine (SVM), K-nearest neighbors (KNN), Decision tree (DT), Neural network (NN), Random forest (RF), Naïve Bayes (NB), Adaboost (BST), and Sequential minimal optimization (SMO).…”
Section: Evaluation of Proposed BFFSBR Feature Selection Method
Citation type: mentioning
confidence: 99%
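For orientation only, a minimal sketch of the kind of comparison described in the statement above: a reduced feature subset is evaluated with several of the listed classifiers by cross-validated accuracy. This is not the cited authors' code; the dataset and the selected feature indices are placeholders, scikit-learn is assumed, and SMO is omitted (scikit-learn's SVC already uses an SMO-type solver).

```python
# Hypothetical sketch: compare classifier accuracy on a reduced feature subset.
# Dataset and "selected" indices are stand-ins, not values from the cited work.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)   # placeholder dataset
selected = [0, 3, 7, 21, 27]                 # placeholder reduced feature subset
X_red = X[:, selected]

classifiers = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(),
    "NN": MLPClassifier(max_iter=1000),
    "RF": RandomForestClassifier(),
    "NB": GaussianNB(),
    "BST": AdaBoostClassifier(),
}

# 5-fold cross-validated accuracy of each classifier on the reduced feature set
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X_red, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```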
“…This work belongs to a class of genetic-based machine learning (GBML). Feature selection through GA has been wrapped with different classifiers: support vector machine [35], nearest neighbor classifier [75], ensemble learning [76], random forest [77], and decision trees [78]. Most of the classification processes in previous work on texture classification use support vector machines and random forest classifiers.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
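For context, a minimal single-objective sketch of the GA-wrapper idea mentioned in the statement above: a bit string encodes which features are kept, and fitness is the cross-validated accuracy of a classifier trained on that subset. This is an illustrative assumption using scikit-learn and a stand-in dataset, not the cited paper's bi-objective algorithm (which also minimizes the number of selected features).

```python
# Hypothetical GA wrapper for feature selection (single-objective illustration).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)            # placeholder dataset
n_feat, pop_size, n_gen = X.shape[1], 20, 15

def fitness(mask):
    """Cross-validated accuracy of KNN on the features marked 1 in `mask`."""
    if mask.sum() == 0:                      # empty subsets are invalid
        return 0.0
    return cross_val_score(KNeighborsClassifier(),
                           X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_feat))   # random bit strings
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # binary tournament selection
    parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                   for _ in range(pop_size)]]
    # one-point crossover on consecutive pairs
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, n_feat)
        children[i, cut:], children[i + 1, cut:] = (
            parents[i + 1, cut:].copy(), parents[i, cut:].copy())
    # bit-flip mutation with probability 0.05 per gene
    flip = rng.random(children.shape) < 0.05
    children[flip] = 1 - children[flip]
    pop = children

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best),
      "accuracy:", round(fitness(best), 3))
```

A bi-objective version, as in the cited paper, would instead keep a Pareto front over (accuracy, subset size) rather than a single best individual.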
“…This number may run into the thousands. With the collection of all the data, a gene expression profile for the cell is generated [15].…”
Section: Introduction
Citation type: mentioning
confidence: 99%