EUROCON 2005 - The International Conference on "Computer as a Tool"
DOI: 10.1109/eurcon.2005.1630268

Multi-step forecasting using echo state networks

Abstract: Echo State Networks (ESNs) have recently been proposed as a special class of recurrent neural networks (RNNs) that avoids the vanishing-gradient problem associated with training conventional RNNs and is also less computationally complex. On-line training of ESNs has previously been implemented using an RLS-type algorithm. Our approach aims at avoiding the numerical disadvantages inherent in the RLS algorithm by switching to a simpler and less computationally intensive gradient descent algorithm. Simulations performed on…
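The approach the abstract describes, replacing the RLS readout update with plain gradient descent, can be illustrated with a short sketch. The Python snippet below is a hedged illustration, not the paper's actual implementation: the reservoir size, weight scaling, step size, and the toy one-step-ahead forecasting task are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir; only the linear readout W_out is trained online.
N, n_in, n_out = 100, 1, 1
W_in = rng.uniform(-0.5, 0.5, size=(N, n_in))
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

W_out = np.zeros((n_out, N))
x = np.zeros(N)          # reservoir state
eta = 1e-3               # gradient-descent (LMS) step size, chosen arbitrarily

def online_step(u, d):
    """Advance the reservoir one step, predict, and LMS-update the readout."""
    global x, W_out
    x = np.tanh(W_in @ u + W @ x)   # reservoir state update
    y = W_out @ x                   # linear readout
    e = d - y                       # instantaneous error
    W_out += eta * np.outer(e, x)   # SGD/LMS update of the output weights
    return y

# Toy usage: one-step-ahead forecasting of a sine wave, trained online.
s = np.sin(0.05 * np.arange(3000))
for k in range(s.size - 1):
    online_step(np.array([s[k]]), np.array([s[k + 1]]))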

Cited by 7 publications (5 citation statements)
References 9 publications
Citation statements (ordered by relevance):
“…For online training, this regression can be performed as the reservoir evolves over time via stochastic gradient descent (SGD) as demonstrated in [24]; the output weights are updated iteratively,…”
Section: B. Online Training for ESNs
Confidence: 99%
“…It was shown that the SGD-ESN was sufficient to perform multi-step tracking of a 3-D Lorenz system [24]. That said, convergence performance of the SGD approach is not only heavily dependent on η but is also negatively impacted by the eigenvalue spread of the reservoir cross-correlation matrix [20].…”
Section: B. Online Training for ESNs
Confidence: 99%
“…As a cheaper O(N_ψ) alternative, Kountoriotis et al. [10] proposed updating the output weights directly via stochastic gradient descent (SGD) or, equivalently, Least Mean Squares (LMS), i.e.:…”
Section: A. Online Echo State Network
Confidence: 99%
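The equation this quote introduces is cut off in the excerpt and is left that way above. As context only, a standard SGD/LMS readout update of this kind has the textbook form below; the notation is assumed here, not copied from the citing paper.

% Standard SGD/LMS update of the ESN output weights (textbook form,
% given as context; not the truncated equation from the excerpt).
\[
  e(t) = d(t) - W_{\mathrm{out}}(t)\, x(t), \qquad
  W_{\mathrm{out}}(t+1) = W_{\mathrm{out}}(t) + \eta\, e(t)\, x(t)^{\top},
\]
where $x(t)$ is the reservoir state, $d(t)$ the target, and $\eta$ the step size.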
“…Unlike current online ESNs trained using recursive least squares (RLS) [9] and stochastic gradient descent (SGD) [10], our Bayesian-based formulation gives predictive distributions instead of point-predictions. One can view the OESGP as an online variant of the recently proposed echo state Gaussian process (ESGP) [11], a combination of the ESN and Gaussian process (GP) approaches.…”
Section: Introduction
Confidence: 99%