2006
DOI: 10.1201/9781420013061
Neural Networks for Applied Sciences and Engineering

Cited by 108 publications (83 citation statements)
References 0 publications
“…This is a kind of combination of the first- and second-order derivatives of the error with a free parameter. It has a good convergence rate compared to other algorithms, good accuracy, and is suitable for time series forecasting [10], [26].…”
Section: Methods (mentioning)
confidence: 99%
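The quoted statement does not name the algorithm, but a blend of first-order (gradient) and second-order (Gauss-Newton) information controlled by a free damping parameter matches a Levenberg-Marquardt-style update. A minimal sketch, assuming a two-parameter exponential least-squares fit (the model, data, and function names here are illustrative, not from the cited text):

```python
import numpy as np

def lm_fit(x, y, p0, lam=1e-2, iters=50):
    """Fit y ~ p[0] * exp(p[1] * x) by a damped Gauss-Newton (LM-style) loop."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - p[0] * np.exp(p[1] * x)                       # residuals
        J = np.column_stack([np.exp(p[1] * x),                # d(model)/dp0
                             p[0] * x * np.exp(p[1] * x)])    # d(model)/dp1
        # lam is the "free parameter": large lam -> gradient descent,
        # small lam -> Gauss-Newton (second-order) behaviour.
        A = J.T @ J + lam * np.eye(2)
        step = np.linalg.solve(A, J.T @ r)
        p_new = p + step
        r_new = y - p_new[0] * np.exp(p_new[1] * x)
        if r_new @ r_new < r @ r:     # improvement: trust the quadratic model more
            p, lam = p_new, lam * 0.5
        else:                         # no improvement: fall back toward gradient step
            lam *= 2.0
    return p

x = np.linspace(0.0, 1.0, 40)
y = 2.0 * np.exp(1.5 * x)            # noiseless synthetic data
p = lm_fit(x, y, p0=[1.0, 1.0])
print(np.round(p, 3))                # recovers parameters near [2.0, 1.5]
```

The adaptive damping is what gives the method its good convergence rate: it interpolates between the robustness of gradient descent far from the minimum and the fast quadratic convergence of Gauss-Newton near it.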
“…This use of ANNs for forecasting rests on several considerations: 1) a neural network adapts to the data, so no prior assumptions about the data are needed; 2) a neural network can generalize; 3) a neural network is a universal approximator [9]; 4) a neural network is a nonlinear model, which suits exchange rates [10], [11]. Neural networks are also known to model dynamic time series well [12], [13], including complex, noisy, and partial time series [14].…”
Section: Literature Review (mentioning)
confidence: 99%
“…In on-line methods the weights are updated after each presentation of a training pattern. For some problems this can be effective, especially when the data arrive in real time (Samarasinghe, 2006). On-line training can also reduce training times significantly.…”
Section: 3. Training a Wavelet Network with Back-propagation (mentioning)
confidence: 99%
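The per-pattern update described above can be sketched with a single linear neuron trained by stochastic (on-line) gradient descent; the model, learning rate, and data here are illustrative assumptions, since the quoted passage only specifies that the weights change after each training pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # 200 training patterns, 3 inputs
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w                         # noiseless targets

w = np.zeros(3)
lr = 0.05                              # learning rate
for epoch in range(20):
    for xi, yi in zip(X, y):           # present one pattern at a time...
        err = yi - xi @ w              # ...compute its error...
        w += lr * err * xi             # ...and update the weights immediately
print(np.round(w, 3))                  # converges close to true_w
```

This contrasts with batch training, where the gradient is accumulated over all patterns before a single weight update; the on-line variant needs no stored batch, which is why it fits real-time data streams.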
“…Moreover, a poorly chosen regularization parameter can severely restrict the growth of the weights, and as a result the network will be under-fitted (Samarasinghe, 2006). Finally, in pruning methods the significance of each weight is usually not measured in a statistical way (Anders & Korn, 1999).…”
Section: Model Selection (mentioning)
confidence: 99%
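The under-fitting effect of an oversized regularization parameter can be shown with a ridge-style weight-decay penalty; this is an illustrative sketch in closed form (linear model, synthetic data), not the cited network's training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([3.0, -2.0])          # noiseless targets, true weights [3, -2]

def ridge(X, y, lam):
    # Closed-form minimizer of ||y - X w||^2 + lam * ||w||^2:
    # the penalty term lam * I shrinks the solution toward zero.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_small = ridge(X, y, lam=0.01)        # mild penalty: weights near [3, -2]
w_large = ridge(X, y, lam=1e4)         # oversized penalty: weights crushed
print(np.round(w_small, 2), np.round(w_large, 3))
```

With the oversized penalty, the weight norm collapses by roughly two orders of magnitude, so the model can no longer represent the data: exactly the under-fitting the quoted passage warns about.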