1995
DOI: 10.1002/for.3980140405

Back propagation in time‐series forecasting

Abstract: One of the major constraints on the use of back propagation neural networks as a practical forecasting tool is the number of training patterns needed. We propose a methodology that reduces the data requirements. The general idea is to use the Box‐Jenkins model in an exploratory phase to identify the 'lag components' of the series, to determine a compact network structure with one input unit for each lag, and then apply the validation procedure. This process minimizes the size of the network and consequently th…
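A minimal sketch of the idea described in the abstract, assuming a simple univariate series: an exploratory (Box‐Jenkins style) pass selects significant lags from the sample autocorrelation function, the training patterns get one input per selected lag, and a small one-hidden-layer network is fitted by backpropagation. The function names, thresholds, and network sizes below are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch only (names, thresholds, and sizes are assumptions, not
# the authors' exact procedure): pick 'lag components' from the sample
# autocorrelation function, build one input unit per selected lag, and fit a
# small one-hidden-layer network by backpropagation.
import numpy as np

def significant_lags(series, max_lag=12):
    """Return lags whose sample autocorrelation exceeds a rough 2/sqrt(n) band."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = float(np.dot(x, x))
    threshold = 2.0 / np.sqrt(n)
    lags = [k for k in range(1, max_lag + 1)
            if abs(np.dot(x[:-k], x[k:]) / denom) > threshold]
    return lags or [1]                       # keep at least lag 1

def make_patterns(series, lags):
    """One input column per selected lag; the target is the next value."""
    x = np.asarray(series, dtype=float)
    p = max(lags)
    X = np.column_stack([x[p - k:len(x) - k] for k in lags])
    y = x[p:]
    return X, y

def train_backprop(X, y, hidden=3, epochs=2000, lr=0.01, seed=0):
    """Tiny one-hidden-layer network trained by plain gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    t = np.asarray(y, dtype=float).reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1)                  # hidden activations
        out = h @ W2                         # linear output unit
        err = out - t                        # prediction error
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1.0 - h ** 2)) / len(X)
    return W1, W2

# Usage: lags = significant_lags(series); X, y = make_patterns(series, lags)
# W1, W2 = train_backprop(X, y); one-step forecast = np.tanh(x_new @ W1) @ W2
```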

Cited by 140 publications (59 citation statements)
References 17 publications
“…Most uses of ANN in economics have so far been in financial markets, in part because traditional approaches have had low explanatory power and in part because the ANN approach requires abundant data. ANN models have outperformed the traditional time series models in most cases in forecasting stock prices and exchange rates, or in classification applications such as bond ratings (Ahmadi, 1993; Bosarge, 1993; Kamijo and Tanigawa, 1993; Sharda and Patil, 1993; Refenes, 1993; Donaldson and Kamstra, 1994; Lachtermacher and Fuller, 1995).…”
Section: Introduction
confidence: 98%
“…There are no systematic reports on how to decide the number of input nodes. Lachtermacher and Fuller (1995) observed undesirable effects from additional input nodes for one-step-ahead forecasting but good effects for multi-step prediction. They also found that correct identification of the number of input nodes was more important than the selection of the number of hidden nodes.…”
Section: AIC (Akaike's Information Criterion)
confidence: 91%
“…Finally, the optimal size of the hidden layer is determined by a search over the possible structures. This search begins with a starting number of nodes, taken as the minimum of the hidden-node counts suggested by several rules of thumb available in the literature [22][23][24]; the model is then trained and analyzed. Each time, the number of hidden nodes is increased by one.…”
Section: Verification
confidence: 99%
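The incremental search quoted above can be sketched as follows, again as an assumption-laden illustration rather than the cited authors' exact procedure: start from a rule-of-thumb lower bound on the number of hidden nodes, train and score each candidate on a held-out validation split, and grow the hidden layer one node at a time until validation error stops improving. `train_backprop` refers to the illustrative helper sketched after the abstract.

```python
# Hedged sketch of the hidden-layer search (stopping rule, split, and bound
# are assumptions): start from a rule-of-thumb minimum, train, score on a
# validation split, and add one hidden node at a time while validation error
# keeps improving.
import numpy as np

def rule_of_thumb_hidden(n_inputs, n_outputs=1):
    """One common heuristic: roughly the geometric mean of input and output counts."""
    return max(1, int(round(np.sqrt(n_inputs * n_outputs))))

def search_hidden_size(X, y, max_hidden=10):
    split = int(0.8 * len(X))                          # simple hold-out split
    X_tr, y_tr = X[:split], y[:split]
    X_va, y_va = X[split:], y[split:]
    best_h, best_err = None, np.inf
    h = rule_of_thumb_hidden(X.shape[1])
    while h <= max_hidden:
        W1, W2 = train_backprop(X_tr, y_tr, hidden=h)
        pred = np.tanh(X_va @ W1) @ W2                 # validation forecasts
        err = float(np.mean((pred.ravel() - y_va) ** 2))
        if err < best_err:
            best_h, best_err = h, err
            h += 1                                     # keep growing while it helps
        else:
            break                                      # stop at the first degradation
    return best_h, best_err
```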