2011
DOI: 10.1007/s11063-011-9186-9
Weighting Efficient Accuracy and Minimum Sensitivity for Evolving Multi-Class Classifiers

Cited by 19 publications (9 citation statements)
References 21 publications
“…The number of nodes (S) in the hidden layer in the OP-ELM algorithm is set at the beginning to 100, since this algorithm prunes the useless neurons from the hidden layer. [3,24] improves the original ELM by using a Differential Evolution (DE) algorithm [26] and determines the optimal number of hidden nodes by a cross-validation procedure. The E-ELM CV uses the evolutionary technique to optimize the input weights and the Moore-Penrose generalized inverse to analytically determine the output weights.…”
Section: ELM Algorithms Selected
confidence: 99%
See 1 more Smart Citation
“…The number of nodes (S) in the hidden layer in the OP-ELM algorithm is set at the beginning to 100, since this algorithm prunes the useless neurons from the hidden layer. 3,24] improves the original ELM by using a Differential Evolution (DE) algorithm [26] and determines the optimal number of hidden nodes by a cross-validation procedure. The E − ELM CV uses the evolutionary technique to optimize the input weights and the Moore-Penrose generalized inverse to analytically determine the output weights.…”
Section: Elm Algorithms Selectedmentioning
confidence: 99%
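The statement above describes the core ELM recipe: hidden-layer parameters are set without gradient training (randomly in the basic ELM, by differential evolution in E-ELM), and the output weights are then obtained analytically through the Moore-Penrose generalized inverse. A minimal sketch of that analytic step, using random input weights in place of the evolutionary search (function names and the `tanh` activation are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, seed=0):
    """Basic ELM training sketch.

    Input weights W and biases b are drawn at random here; in E-ELM they
    would instead be optimized by differential evolution. Output weights
    beta are computed analytically via the Moore-Penrose pseudoinverse.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # hidden biases
    H = np.tanh(X @ W + b)                                   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                             # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations times output weights."""
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is solved for (a single least-squares problem), training is fast, which is why the surrounding statements compare these variants on running time as well as accuracy.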
“…Table 5 reports the average running time (considering also the cross-validation and the test time) of the algorithms considered. All the experiments were run using a common Matlab framework proposed in [8,24]. The proposed algorithm was developed and included in the above-mentioned framework.…”
Section: Time Complexity Analysis
confidence: 99%
“…Acc values range from 0 to 100 and they represent a global performance on the classification task, not being suitable for imbalanced datasets (Sánchez-Monedero et al., 2011).…”
Section: Performance Evaluation Metrics
confidence: 99%
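The point being made is that accuracy is a global average, so a classifier can score highly while ignoring a minority class entirely; the paper under discussion pairs Acc with the minimum per-class sensitivity to expose this. A small sketch of both quantities (the function name is ours, not from the cited work):

```python
import numpy as np

def acc_and_min_sensitivity(y_true, y_pred, n_classes):
    """Global accuracy (Acc, in %) and minimum per-class sensitivity (in %).

    Sensitivity of class c is the fraction of true class-c patterns that
    are predicted as c; the minimum over classes reveals a neglected
    minority class that the global Acc can hide.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    acc = 100.0 * np.mean(y_true == y_pred)
    sens = [100.0 * np.mean(y_pred[y_true == c] == c) for c in range(n_classes)]
    return acc, min(sens)
```

For example, on a dataset with 95 patterns of class 0 and 5 of class 1, a classifier that always predicts class 0 reaches Acc = 95% while its minimum sensitivity is 0%.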
“…Considering the above definitions, ordinal classification is different from nominal classification in the evaluation of the classifier performance and also in the fact that the classifier should exploit the ordinal data disposition. For the former, as an example, although accuracy (Acc) has been widely used in classification tasks, it is not suitable for some types of problems, such as imbalanced datasets [43] (very different number of patterns for each class) and ordinal datasets [44]. Then, the performance metrics must consider the order of the classes, so that errors between adjacent classes should be considered less important than the ones between separated classes in the ordinal scale.…”
Section: Problem Formulation
confidence: 99%
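The ordinal requirement stated above, that a mistake between distant classes must cost more than one between adjacent classes, is commonly captured by the mean absolute error on the class indices. A minimal sketch (assuming classes are encoded as consecutive integers 0..Q-1; the encoding and function name are our assumptions):

```python
import numpy as np

def mae_ordinal(y_true, y_pred):
    """Mean absolute error on ordinal class labels.

    Unlike Acc, which counts every mistake equally, this penalizes a
    confusion between distant classes on the ordinal scale more heavily
    than one between adjacent classes.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))
```

So predicting class 1 for a true class 0 contributes an error of 1, while predicting class 0 for a true class 2 contributes 2, in line with the adjacency argument in the quoted passage.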