2011
DOI: 10.1016/j.ijforecast.2009.05.029
MLP ensembles improve long term prediction accuracy over single networks

Cited by 49 publications (22 citation statements)
References 11 publications
“…The ordering phase usually requires setting a high learning rate. The simple ANN mode was referred to as "ANN I" and the modified version as "ANN II", as depicted earlier in Figure 1. For both modes, the input data were divided randomly into training, validation and test subsets of 70%, 15% and 15% respectively, for weight learning, over-fitting prevention and performance validation [1]. The network was trained using the Levenberg-Marquardt backpropagation algorithm, which is known as one of the most efficient ANN training algorithms [17] [2].…”
Section: Artificial Neural Network Techniques
confidence: 99%
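The 70%/15%/15% random partition described in the statement above can be sketched as follows. This is an illustrative helper, not code from the cited work; the function name `split_70_15_15` and the fixed seed are assumptions for reproducibility.

```python
import numpy as np

def split_70_15_15(X, y, seed=0):
    """Randomly partition (X, y) into 70% training, 15% validation and
    15% test subsets, matching the split scheme described above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(0.70 * len(X))
    n_val = int(0.15 * len(X))
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```

The training subset drives weight learning, the validation subset is monitored to stop before over-fitting, and the held-out test subset measures final performance.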
“…In ensemble construction, once the characteristics of the models are established, variety is usually introduced by modifying the initial random weights ([20]) or by randomising the training samples ([7], [21]). The randomisation of the feature space ([9]) could count both as a strategy for model variation and as a strategy for model specification.…”
Section: Generate
confidence: 99%
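The two diversity mechanisms mentioned above — randomised initial weights and randomised (bootstrap) training samples — can be sketched together. This is a minimal illustration assuming a one-hidden-layer MLP; the function names and the normal-scale-0.1 initialisation are hypothetical choices, not the cited papers' settings.

```python
import numpy as np

def bootstrap_sample(X, y, rng):
    """Randomise the training sample: draw len(X) examples with replacement."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

def random_mlp_weights(n_in, n_hidden, rng, scale=0.1):
    """Vary the initial random weights of a one-hidden-layer MLP."""
    W1 = rng.normal(scale=scale, size=(n_in, n_hidden))
    W2 = rng.normal(scale=scale, size=(n_hidden, 1))
    return W1, W2

def generate_ensemble(X, y, n_members, n_hidden, seed=0):
    """Each member gets its own bootstrap sample AND its own random
    initialisation, so trained members end up diverse."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        Xb, yb = bootstrap_sample(X, y, rng)
        members.append((Xb, yb, random_mlp_weights(X.shape[1], n_hidden, rng)))
    return members
```

Either mechanism alone already yields diverse members; combining both is a common (though not universal) choice.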
“…The methods that have been proposed in the literature (focusing on forecasting rather than on classification) include: a gating network ([22]), a simple or weighted average ([8], [12], [23], [24]), a nonlinear average through another NN ([23]), a feed-forward NN ([25], [26]), a Radial Basis Function ([11]), the median of forecasts ([10], [20]) and the mode ([19]). It can be seen that the complexity and effectiveness of the ensembling approaches are only partially related to the combining procedure at the end of the process, as there are other steps involved.…”
Section: Generate
confidence: 99%
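The simplest combination rules listed above (simple average, weighted average, median) can be sketched in a few lines; the learned combiners (gating network, NN, RBF) require trained models and are omitted. The function name `combine` and its interface are assumptions for illustration.

```python
import numpy as np

def combine(forecasts, method="median", weights=None):
    """Combine an (n_models, horizon) array of point forecasts.

    method: "mean"     -> simple average across models
            "weighted" -> weighted average using `weights` (one per model)
            "median"   -> per-horizon median, robust to outlier members
    """
    F = np.asarray(forecasts, dtype=float)
    if method == "mean":
        return F.mean(axis=0)
    if method == "weighted":
        w = np.asarray(weights, dtype=float)
        return (w[:, None] * F).sum(axis=0) / w.sum()
    if method == "median":
        return np.median(F, axis=0)
    raise ValueError(f"unknown method: {method}")
```

The median is often preferred when a few ensemble members may diverge badly, since a single outlying forecast cannot drag the combined value far.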
“…In this work, an ensemble of 31 Logistic Regression models reduced the system's variance, and their median was taken as the response for each test example. This median approach had been adopted by the authors' teams since 2007, in the PAKDD Data Mining Competition [12] and the NN3 Time Series Forecasting Competition [13]. As already stated, Logistic Regression was chosen as the modelling technique for its explicit coefficients and because it does not require a validation set.…”
Section: Logistic Regression and Model Ensemble
confidence: 99%
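The 31-model median scheme described above can be sketched with a plain gradient-descent logistic regression as an illustrative stand-in; the cited work's exact models, features and training procedure are not specified here, and the bootstrap resampling used to diversify members below is an assumption.

```python
import numpy as np

def fit_logreg(X, y, lr=0.1, epochs=200):
    """Plain logistic regression via batch gradient descent
    (an illustrative stand-in, not the authors' implementation)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                       # gradient of log-loss w.r.t. logits
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def median_ensemble(X, y, X_test, n_models=31, seed=0):
    """Train n_models logistic regressions on bootstrap samples and return
    the per-example MEDIAN of their predicted probabilities."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))
        w, b = fit_logreg(X[idx], y[idx])
        preds.append(1.0 / (1.0 + np.exp(-(X_test @ w + b))))
    return np.median(preds, axis=0)
```

Taking the median of 31 probability estimates damps the variance contributed by any single model while keeping each member's coefficients individually interpretable.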