2014
DOI: 10.1016/j.neucom.2013.03.026

Border Pairs Method—constructive MLP learning classification algorithm

Cited by 10 publications (3 citation statements) | References 5 publications
“…Also, four models with 20 different types of architecture were examined, which are fully listed in Table 3. To find the optimum architecture for ANN models, including maximum number of hidden layers and neurons, the experts' knowledge and the trial and error method are used [24]. Before performing the classification using neural network, first the relationship between independent variables (minimum temperature, average temperature, maximum temperature, and RH) with the dependent variable (frequency of positive cases) was investigated.…”
Section: Neural Network
confidence: 99%
“…to the high degree of connectivity between the nodes and to the increased nonlinearity of this neural network, its generalization ability is among the best, being able to deal even with noisy and missing data. More recently, paradigms like deep learning offered the MLP an "upgrade" through the Border Pairs Method [40], thus considerably improving its performances. While increasing the number of hidden layers is likely to lead to the improvement of overall performances, potentially revealing key features embedded in the data, adding too many of them tends to dramatically augment the training time.…”
Section: JINST 11 C12004
confidence: 99%
“…The EBP algorithm offered a tool to determine the value of the parameters, while the determination of the optimal number of neurons is still an open problem. Successively, methods have been presented to address both questions [3][4][5][6][7][8], but the EBP paradigm, with an important series of variations, still represents the standard of machine learning. This paradigm consists in finding the minimum of a loss function, which is typically given by the output mean squared error with respect to the target value.…”
Section: Introduction
confidence: 99%
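The last quoted passage describes the EBP paradigm as minimizing a loss function, typically the output mean squared error with respect to the target value. A minimal sketch of that idea follows, reduced to a single linear neuron for brevity; the data, learning rate, and variable names are illustrative assumptions, not taken from the cited works. An MLP trained by EBP applies the same chain-rule update layer by layer.

```python
import numpy as np

# Sketch: gradient descent on a mean-squared-error (MSE) loss,
# the optimization core of the error-backpropagation (EBP) paradigm.
# A single linear neuron is used here so convergence is easy to verify.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))        # synthetic inputs (assumed)
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.5                 # targets from a known linear rule

w = np.zeros(2)                      # weights to learn
b = 0.0                              # bias to learn
lr = 0.1                             # learning rate (assumed)
for _ in range(500):
    pred = X @ w + b
    err = pred - y                   # output error w.r.t. the target
    loss = np.mean(err ** 2)         # the MSE loss being minimized
    grad_w = 2 * X.T @ err / len(y)  # dLoss/dw via the chain rule
    grad_b = 2 * err.mean()          # dLoss/db
    w -= lr * grad_w                 # step toward the loss minimum
    b -= lr * grad_b

print(w, b, loss)                    # w ≈ [2, -1], b ≈ 0.5, loss ≈ 0
```

Because the targets are noise-free, the MSE can be driven essentially to zero; with noisy data the same update converges to the minimum-MSE parameters instead.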