Design of self-adaptive and equilibrium differential evolution optimized radial basis function neural network classifier for imputed database
Year: 2016
DOI: 10.1016/j.patrec.2016.05.002

Citation types: 6 mentioning (0 supporting, 0 contrasting)
Citing publications: 2019–2023
Cited by 14 publications (6 citation statements). References 41 publications.
“…The parameters of multi-layer perceptron (MLP) along with training algorithms and Simple Logistic are defined as prescribed in [3].…”
Section: Description of Datasets and Parameters (mentioning)
Confidence: 99%
“…Therefore, to derive novel and useful results for the decision maker, the process of imputing and identifying missing values and relevant features, respectively are highly recommended. Since decades ago these two problems are treated as the problem of importance in object detection & recognition (pattern recognition) [11] and data mining [3] in general and ECG signals diagnosis [13], power flow calculation [14], simulation and control of dynamic system [15], magnetic modeling [16], identification and classification of plant leaf diseases [17], discrimination of low and full fat Yogurts [19,20,22] in specific.…”
Section: Introduction (mentioning)
Confidence: 99%
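For orientation only: the imputation step this excerpt refers to can be as simple as column-mean filling. The Python sketch below is a generic baseline under that assumption, not the imputation scheme used in the cited works.

import numpy as np

def mean_impute(X):
    """Replace NaN entries with the mean of their column.

    A deliberately simple baseline; the cited works use more
    elaborate imputation schemes.
    """
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)           # per-feature means, NaNs ignored
    nan_rows, nan_cols = np.where(np.isnan(X))  # positions of missing entries
    X[nan_rows, nan_cols] = col_means[nan_cols]
    return X

# Example: a small feature matrix with missing values
X = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0,    np.nan],
              [7.0, 8.0,    9.0]])
print(mean_impute(X))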
“…The output weight has not been considered in the feature extraction process by using the proposed algorithm [7]. Dash et al used differential evolution (DE) to optimize RBFNN by adaptively controlling the hidden parameter in the hidden layers [8]. This method, nevertheless, lacks interpretability in the hidden layer of RBFNN [8].…”
Section: Introduction (mentioning)
Confidence: 99%
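The mechanism this statement points at, differential evolution tuning the hidden-layer parameters of an RBF network, can be sketched in a few lines. The Python below is a minimal illustration under stated assumptions: a textbook DE/rand/1/bin evolves the Gaussian kernel widths for a fixed set of centers, with output weights solved by least squares. It is not the self-adaptive, equilibrium DE variant of the paper itself, and all settings (population size, F, CR) are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centers, widths):
    # Gaussian hidden activations: phi_ij = exp(-||x_i - c_j||^2 / (2 s_j^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fitness(widths, X, y, centers):
    # Solve output weights by least squares, then return training MSE
    H = rbf_design(X, centers, widths)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((H @ w - y) ** 2)

def de_optimize(X, y, centers, pop=20, gens=100, F=0.5, CR=0.9):
    """Classic DE/rand/1/bin over the vector of RBF widths.

    A textbook DE, not the paper's self-adaptive/equilibrium variant.
    """
    dim = centers.shape[0]
    P = rng.uniform(0.1, 2.0, size=(pop, dim))  # candidate width vectors
    f = np.array([fitness(p, X, y, centers) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            v = P[a] + F * (P[b] - P[c])         # mutation
            mask = rng.random(dim) < CR          # binomial crossover
            mask[rng.integers(dim)] = True       # ensure one gene crosses over
            u = np.where(mask, v, P[i])
            u = np.clip(u, 1e-3, None)           # keep widths positive
            fu = fitness(u, X, y, centers)
            if fu < f[i]:                        # greedy one-to-one selection
                P[i], f[i] = u, fu
    return P[f.argmin()], f.min()

# Toy usage: fit a 1-D regression target
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0])
centers = X[rng.choice(len(X), 8, replace=False)]
best_widths, best_mse = de_optimize(X, y, centers)
print("best training MSE:", best_mse)

In this simplified reading, "adaptively controlling the hidden parameter" amounts to letting DE replace a unit's width vector whenever the trial vector lowers the error, while the output layer is refit in closed form at every evaluation.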
“…Dash et al used differential evolution (DE) to optimize RBFNN by adaptively controlling the hidden parameter in the hidden layers [8]. This method, nevertheless, lacks interpretability in the hidden layer of RBFNN [8]. Yang and Ma [9] tried to apply the Sparse Neural Network (SNN) algorithm to optimize the hidden neuron number.…”
Section: Introduction (mentioning)
Confidence: 99%
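The SNN algorithm of [9] is only name-checked in this statement. As a generic stand-in (plainly not SNN), the Python sketch below picks the hidden neuron count by a held-out validation sweep over candidate sizes; every name and setting in it is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(1)

def val_mse(X_tr, y_tr, X_va, y_va, m, width=0.7):
    # Train an RBF layer with m random centers; score on held-out data
    centers = X_tr[rng.choice(len(X_tr), m, replace=False)]
    def design(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
    return np.mean((design(X_va) @ w - y_va) ** 2)

X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0])
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]

# Pick the hidden-layer size with the lowest validation error
sizes = [2, 4, 8, 16, 32]
errors = [val_mse(X_tr, y_tr, X_va, y_va, m) for m in sizes]
print("best size:", sizes[int(np.argmin(errors))])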