2012
DOI: 10.1016/j.camwa.2012.01.042
Novel weighting in single hidden layer feedforward neural networks for data classification

Cited by 11 publications (4 citation statements)
References 14 publications
“…Note that each rule of the system analyzes only one metric. When there is more than one indicator, the input fuzzy concepts in the rule premise are combined by the fuzzy AND operation (Luckin (2017); Seifollahi et al (2012)):…”
mentioning
confidence: 99%
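The quoted statement describes combining several input fuzzy concepts in a rule premise with a fuzzy AND. A minimal sketch of that combination, assuming the min t-norm (a common choice for fuzzy AND; the cited works may use a different t-norm):

```python
# Minimal sketch of combining the membership degrees of a rule's input
# fuzzy concepts with the fuzzy AND operation, here taken as the min
# t-norm. This is an illustrative assumption, not the exact operator
# used in the cited papers.

def fuzzy_and(memberships):
    """Combine membership degrees in [0, 1] with the min t-norm."""
    if not memberships:
        raise ValueError("at least one membership degree is required")
    return min(memberships)

# A rule premise with three input indicators fires only as strongly
# as its weakest condition.
degrees = [0.9, 0.6, 0.75]
print(fuzzy_and(degrees))  # 0.6
```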
“…In the case of an NN classifier, the number of hidden layers is also a relevant factor that decisively conditions the final model and its results. In this work, the NN classifier has been created with only one hidden layer for two reasons: first, it is the most frequent value in the literature for binary classifiers in terms of time complexity [72,73,74], and second, because the size and complexity of the dataset were small enough. In part 'a', results from 1 to 50 estimators are shown, which allows us to evaluate performance at large scale, every 10 estimators. The resulting accuracy curve fluctuates (Figure 6a), not improving as the number of estimators increases.…”
Section: Neural Network Classifier
mentioning
confidence: 99%
“…In the case of an NN classifier, the number of hidden layers is also a relevant factor that decisively conditions the final model and its results. In this work, the model of the NN classifier has been created with only one hidden layer because of two reasons: first, it is the most frequent value in the literature for binary classifiers in terms of time complexity [72][73][74], and second, because the size and complexity of the dataset were small enough to overcome any overfitting problem with only one hidden layer. Following the same strategy we adopted in the case of the RF classifier, a curve was plotted to compare the accuracy depending on the number of neurons used in the hidden layer to be able to select the number of neurons to obtain the best results with our dataset.…”
Section: Neural Network Classifier
mentioning
confidence: 99%
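The quoted passage sweeps the number of neurons in a single hidden layer and compares accuracy. A minimal sketch of the forward pass of such a single-hidden-layer feedforward network (SLFN), with random placeholder weights, assumed tanh hidden units, and a sigmoid output; this illustrates the architecture only, not the novel weighting scheme of the cited paper:

```python
import numpy as np

# Sketch of a single-hidden-layer feedforward network (SLFN) forward
# pass for binary classification. All weights are random placeholders;
# the activation choices (tanh hidden layer, sigmoid output) are
# assumptions for illustration.

rng = np.random.default_rng(0)

def slfn_forward(X, n_hidden, rng):
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # input-to-hidden weights
    b = rng.normal(size=n_hidden)                # hidden biases
    v = rng.normal(size=n_hidden)                # hidden-to-output weights
    H = np.tanh(X @ W + b)                       # hidden activations
    return 1.0 / (1.0 + np.exp(-(H @ v)))        # sigmoid output in (0, 1)

X = rng.normal(size=(5, 3))                      # 5 samples, 3 features
for n_hidden in (2, 8, 32):                      # sweep hidden-layer size
    p = slfn_forward(X, n_hidden, rng)
    print(n_hidden, p.shape)
```

In practice each hidden-layer size in the sweep would be trained and scored on held-out data, producing the accuracy curve the authors describe.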
“…However, the application of fuzzy inference in RBFN has only been tested on classification problems, and setting the fuzzy rules is a challenging task for researchers who do not have a fuzzy-theory background. In contrast, the use of statistical methods to obtain suitable weights for RBFN has been reported in numerous studies [45][46][47][48], which use stochastic methods, the Grover search algorithm, hierarchical Markovian matrices, and the attributed class correlation method for that purpose. Furthermore, it is also reported that the learning algorithm for network training may perform worse as the dataset grows [49].…”
Section: Related Work
mentioning
confidence: 99%
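The quoted passage contrasts iterative training with statistical methods for obtaining RBFN weights. A minimal sketch of the general idea, assuming Gaussian basis functions and ordinary least squares for the output weights (the cited works use other schemes, such as stochastic methods or Grover search; this only illustrates computing the weights in closed form rather than training them iteratively):

```python
import numpy as np

# Sketch of a radial basis function network (RBFN) whose hidden-to-output
# weights are obtained statistically, here by ordinary least squares over
# Gaussian activations. Centers, width, and the toy data are illustrative
# assumptions.

def rbf_design_matrix(X, centers, width):
    """Gaussian activation of each sample at each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbfn_weights(X, y, centers, width):
    """Closed-form least-squares solution for the output weights."""
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary target
centers = X[:5]                            # 5 training points as centers
w = fit_rbfn_weights(X, y, centers, width=1.0)
pred = rbf_design_matrix(X, centers, 1.0) @ w
print(w.shape, pred.shape)
```

Because the output weights solve a linear system, this fit avoids the iterative training whose degradation on larger datasets the quote mentions, though center selection still matters.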