1996
DOI: 10.1016/0925-2312(95)00137-9
Much ado about nothing? Exchange rate forecasting: Neural networks vs. linear models using monthly and weekly data

Cited by 105 publications (46 citation statements). References 14 publications.
“…They find the GRNN superior to all other models regarding forecasting accuracy. Hann and Steurer (1996), however, can only confirm a superior performance of Neural Networks for weekly data; they conclude that, for their investigation of the USD/DEM exchange rate on data from January 1986 to October 1994, linear models and Neural Networks in the framework of error-correction models give almost the same results for monthly data. […] look into the impact of parameter settings of Neural Networks, applying them to the daily and weekly GBP/USD exchange rate from the beginning of 1976 to the end of 1993.…”
Section: Literature Review
confidence: 79%
“…One method to determine the optimal number of hidden nodes was introduced by Le Cun et al. (1990) and is known as Optimal Brain Damage: the idea is to start with an oversized model and to gradually prune redundant weights during the training procedure. Other methods are, for instance, weight decay or weight elimination, which add a penalty term for increased network size to the error function; a neural network information criterion assessing the trade-off between the accuracy of the approximation and the size of the network; or a Principal Component Analysis (Hann and Steurer, 1996).…”
Section: Network Characteristics
confidence: 99%
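The two pruning ideas in the excerpt can be sketched in a few lines. This is a hypothetical NumPy illustration, not the papers' implementations: `obd_prune` ranks weights by the Optimal Brain Damage saliency s_i = h_ii * w_i^2 / 2 (a diagonal-Hessian estimate of the loss increase from deleting weight i) and zeros the least salient ones, while `penalized_error` shows the weight-decay idea of adding a size penalty to the error function.

```python
import numpy as np

def obd_prune(weights, hessian_diag, frac=0.2):
    """Optimal Brain Damage sketch (hypothetical helper): zero out the
    fraction `frac` of weights with the lowest saliency s_i = h_ii*w_i^2/2,
    where h_ii is a diagonal approximation of the Hessian of the loss."""
    w = weights.ravel().copy()
    saliency = 0.5 * hessian_diag.ravel() * w**2   # estimated loss increase per weight
    k = int(frac * w.size)                         # number of weights to prune
    cut = np.argsort(saliency)[:k]                 # indices of least-salient weights
    w[cut] = 0.0
    return w.reshape(weights.shape)

def penalized_error(residuals, weights, lam=1e-3):
    """Weight-decay sketch: mean squared error plus a penalty that grows
    with network size (sum of squared weights), as described in the text."""
    return np.mean(residuals**2) + lam * np.sum(weights**2)
```

With `weights = [1.0, 0.1, -2.0, 0.05]` and a unit Hessian diagonal, `obd_prune(..., frac=0.5)` removes the two smallest-magnitude weights, leaving `[1.0, 0.0, -2.0, 0.0]`.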
“…The measurement U of Theil, which is also known as the NMSE [8], determines the performance of a model relative to a random walk model. U of Theil is described by the equation

U = Σ_t (ŷ_t − y_t)² / Σ_t (y_t − y_{t−1})²   (6)

where y_t is the observed value and ŷ_t the forecast. If U of Theil is equal to one, the tested model has the same performance as a random walk.…”
Section: U Of Theil or NMSE – Normalized Mean Square Error
confidence: 99%
“…Thus, a way to evaluate a model against a random walk model is the Normalized Mean Squared Error (NMSE), or U of Theil statistic (THEIL) [70], which relates the model's performance to that of a random walk model and is given by

THEIL = Σ_t (ŷ_t − y_t)²  / Σ_t (y_t − y_{t−1})²   (63)

where, if THEIL is equal to 1, the predictor has the same performance as a random walk model. If THEIL is greater than 1, the predictor performs worse than a random walk model, and if THEIL is less than 1, the predictor performs better than a random walk model.…”
Section: Performance Metrics
confidence: 99%
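The statistic described in these two excerpts can be sketched directly: squared forecast errors normalized by the squared errors of a naive random walk, which predicts each y_t by the previous observation y_{t−1}. This is a minimal illustration of that definition (function name and interface are my own):

```python
import numpy as np

def theil_u_nmse(y_true, y_pred):
    """NMSE / U-of-Theil sketch: ratio of the model's squared prediction
    errors to those of a random walk (forecast y_t by y_{t-1}).
    Returns 1 for random-walk-equivalent performance, < 1 if better,
    > 1 if worse. The first observation has no random-walk forecast,
    so sums run from t = 1."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    model_sse = np.sum((y_pred[1:] - y_true[1:])**2)   # model errors
    rw_sse = np.sum((y_true[1:] - y_true[:-1])**2)     # random-walk errors
    return model_sse / rw_sse
```

A predictor that simply repeats the last observed value scores exactly 1, and a perfect predictor scores 0, matching the interpretation given in the text.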