2017
DOI: 10.3906/elk-1606-122
A fast feature selection approach based on extreme learning machine and coefficient of variation

Abstract: Feature selection reduces the size of a dataset without degrading its accuracy. In this study, we propose a novel feature selection approach based on extreme learning machines (ELMs) and the coefficient of variation (CV). In the proposed approach, the most relevant features are identified by ranking each feature with the coefficient obtained through the ELM divided by the CV. The accuracies and computational costs obtained with the features selected via the proposed approach …
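The ranking rule described in the abstract can be sketched in a few lines. This is a hypothetical reconstruction, not the paper's exact model: the abstract does not specify how the per-feature coefficient is extracted, so the sketch below stands in with the least-squares weight that the ELM training rule (a Moore-Penrose pseudoinverse fit) assigns to each feature, and divides it by that feature's coefficient of variation. The function name `elm_cv_scores` is illustrative.

```python
import numpy as np

def elm_cv_scores(X, y):
    """Score each feature as |coefficient| / CV (higher = more relevant).

    Assumption: the per-feature coefficient is taken from a linear
    pseudoinverse fit (the ELM training rule applied directly to the
    features); the paper's exact coefficient may differ.
    """
    # Coefficients via the Moore-Penrose pseudoinverse, as in ELM training.
    beta = np.linalg.pinv(X) @ y
    # Coefficient of variation of each feature: std / |mean|.
    cv = X.std(axis=0) / np.abs(X.mean(axis=0))
    return np.abs(beta) / cv
```

Selecting features then amounts to keeping the top-k indices of `np.argsort(scores)[::-1]`, which matches the abstract's goal of shrinking the data while preserving accuracy.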

Cited by 22 publications (4 citation statements); references 29 publications.
“…The feature selection method is one of the significant issues in machine learning and is widely used in pattern detection and recognition applications [32]. The ELM, which has a high-speed training phase and good generalization capacity, is considered a fine approach for feature selection [42][43][44].…”
Section: ELM-Based Feature Selection (ELM-FS) Methods (mentioning)
confidence: 99%
“…Although the ELM is generally used as a machine learning method, it was developed by [32] into a consistent, fast, and independent feature selection algorithm with better accuracy. In this method, the biases and weights of the hidden layer are assigned at random, and the output layer's weights are computed with the aid of the Moore-Penrose approach [45,46]. The output of the ELM, which has a single output neuron, is given by Equation 3.…”
Section: ELM-Based Feature Selection (ELM-FS) Methods (mentioning)
confidence: 99%
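The training procedure quoted above (random hidden weights and biases, output weights computed analytically via the Moore-Penrose pseudoinverse, a single output neuron) can be sketched as follows. Names and the choice of `tanh` activation are illustrative assumptions; the cited papers may use a different activation or hidden-layer size.

```python
import numpy as np

def train_elm(X, y, n_hidden=30, seed=0):
    """ELM training as described in the quote: only beta is learned.

    Hidden-layer weights W and biases b are assigned at random and never
    updated; the output weights beta come from the pseudoinverse of the
    hidden-layer output matrix H.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    beta = np.linalg.pinv(H) @ y                     # Moore-Penrose solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Single-output-neuron response, in the spirit of Equation 3: h(x) @ beta.
    return np.tanh(X @ W + b) @ beta
```

Because the only trained quantity is a linear solve, this captures the "high-speed training phase" the citing papers emphasize: there is no iterative gradient descent at all.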
“…Therefore, these parameters can be chosen at random, and the output weights can be determined analytically. This makes the RNN simple and fast in data processing [32] and ensures good generalization for a single FFNN [33]. Compared with other gradient-based learning algorithms, the RNN has several advantages, such as the potential to reach the minimum training error, the ability to operate with non-differentiable activation functions, and the use of a single hidden layer [34].…”
Section: Randomized Neural Network (mentioning)
confidence: 99%