2014
DOI: 10.14311/nnw.2014.24.034
Hybrid Neural Network-Particle Swarm Algorithm to Describe Chaotic Time Series

Abstract: This tutorial is based on a modification of the professor-nomination lecture presented two years ago before the Scientific Council of the Czech Technical University in Prague [16]. It is devoted to techniques for developing models suitable for forecasting processes in complex systems. Because of the high sensitivity of these processes to initial conditions and, consequently, our limited ability to forecast them over a long-term horizon, attention is focused on the techniq…

Cited by 12 publications (28 citation statements)
References 26 publications (53 reference statements)
“…These statistical parameters are calculated as follows:

MAE = \frac{1}{N} \sum_{i=1}^{N} \left| Y_{calc} - Y_{real} \right|_i

R = \frac{\sum_{i=1}^{N} (Y_{calc} - \bar{Y}_{calc})_i (Y_{real} - \bar{Y}_{real})_i}{\sqrt{\sum_{i=1}^{N} (Y_{calc} - \bar{Y}_{calc})_i^2 \, \sum_{i=1}^{N} (Y_{real} - \bar{Y}_{real})_i^2}}

During this entire operation, the backpropagation learning algorithm updates the network weights and biases in the direction in which the merit function decreases most rapidly, i.e., the negative gradient [Haykin]. Note that our proposed ANN was trained to minimize the RMSE (objective function), but with the backpropagation learning algorithm [Rumelhart and McClelland] replaced by a particle swarm optimization (PSO) algorithm [Kennedy and Eberhart] to optimize the weight updates in the ANN [Lazzús et al.].…”
Section: Methods (confidence: 99%)
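The two statistics quoted above can be sketched in a few lines of NumPy. This is a minimal illustration of the formulas as reconstructed here, not the cited authors' code; the array names `y_calc` and `y_real` are hypothetical.

```python
import numpy as np

def mae(y_calc, y_real):
    # Mean absolute error: (1/N) * sum_i |y_calc - y_real|_i
    return np.mean(np.abs(y_calc - y_real))

def pearson_r(y_calc, y_real):
    # Correlation coefficient R: covariance of the mean-centered
    # series divided by the product of their root sums of squares.
    dc = y_calc - y_calc.mean()
    dr = y_real - y_real.mean()
    return np.sum(dc * dr) / np.sqrt(np.sum(dc**2) * np.sum(dr**2))
```

For a perfectly linear relation between the two series, `pearson_r` returns 1, which is a quick sanity check on the implementation.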
“…Thus, PSO is a population-based algorithm initialized with a random particle population, using a search strategy based on updating generations [Lazzús et al.]. At each iteration, the algorithm calculates the velocity of each particle j as follows [Eberhart and Shi]:

v_j^{k+1} = \omega v_j^k + c_1 r_1 (\psi_j^k - s_j^k) + c_2 r_2 (\psi_g^k - s_j^k)

where s represents the particle position and v the particle velocity in the search space, \omega is the swarm inertia weight, c_1 and c_2 are two acceleration constants, r_1 and r_2 are two elements each drawn at random from the range [0,1], and k is the current iteration; s_j^k is the current particle position, and \psi_j^k and \psi_g denote the best solution the particle has reached and the best solution all particles have reached, respectively [Lazzús et al.]. Note that the particle velocity can be clipped to the range [-v_{max}, v_{max}] in order to keep particles from roaming outside the search space [Clerc and Kennedy].…”
Section: Methods (confidence: 99%)
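The velocity update described above can be sketched as follows. This is a generic PSO step under assumed parameter values (`w`, `c1`, `c2`, `v_max` are illustrative, not the values used in the cited work), and the function name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_velocity(v, s, p_best, g_best, w=0.7, c1=1.5, c2=1.5, v_max=1.0):
    # One velocity update:
    #   v^{k+1} = w*v^k + c1*r1*(p_best - s) + c2*r2*(g_best - s)
    # where p_best is the particle's own best position and g_best the
    # swarm's best; r1, r2 are fresh uniform random draws in [0, 1].
    r1 = rng.random(v.shape)
    r2 = rng.random(v.shape)
    v_new = w * v + c1 * r1 * (p_best - s) + c2 * r2 * (g_best - s)
    # Clip to [-v_max, v_max] to keep particles inside the search space.
    return np.clip(v_new, -v_max, v_max)
```

The clipping step corresponds to the [-v_max, v_max] restriction attributed to Clerc and Kennedy in the excerpt; when a particle already sits at both its personal and the global best, the update reduces to pure inertia.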