2012
DOI: 10.1007/978-3-642-35289-8_36

A Practical Guide to Applying Echo State Networks

Abstract: Reservoir computing has emerged in the last decade as an alternative to gradient descent methods for training recurrent neural networks. The Echo State Network (ESN) is one of the key reservoir computing "flavors". While being practical, conceptually simple, and easy to implement, ESNs require some experience and insight to achieve the hailed good performance in many tasks. Here we present practical techniques and recommendations for successfully applying ESNs, as well as some more advanced applications…


Cited by 649 publications (666 citation statements). References 38 publications.
“…In order to evaluate the performance of the memristor-based ESN with CNN structure (MCNN ESN), 10 running results are obtained for the proposed ESN with memristive CNN structure, the original ESN structure with memristive connections, and the original ESN, as shown in Table 2, using Python 2.7, the Oger toolbox 1.1.3, and the script developed by Mantas [21]. The results are measured using the mean-squared error (MSE) shown in (25), which computes the differences between the predicted results and the test set of the Mackey-Glass dataset.…”
Section: Results
confidence: 99%
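
The MSE criterion the excerpt refers to is straightforward to reproduce; a minimal Python sketch (the array names and the averaging step are illustrative assumptions, not taken from the cited script):

import numpy as np

def mse(y_pred, y_true):
    # Mean-squared error between predicted and target sequences.
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

# Hypothetical usage, averaging over 10 independent runs as in the excerpt:
# scores = [mse(predictions[i], test_target) for i in range(10)]
# print(np.mean(scores))
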
“…However, in some trials, the readout weights of the memristive reservoir are noticeably greater than the original ESN's, for example in the range (−6, 6). According to the practical guide [21], large output weights W_out may imply that the solution is sensitive and unstable, because a tiny difference will be amplified by the output weights and lead to large deviations from the expected values. Therefore, the average performance of the proposed ESN is slightly more sensitive than that of the original ESN.…”
Section: Results
confidence: 99%
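
The practical guide's remedy for oversized W_out is ridge (Tikhonov) regularization of the readout; a minimal sketch, assuming the reservoir states X and teacher outputs Y have already been collected (the names and the beta value are illustrative):

import numpy as np

def ridge_readout(X, Y, beta=1e-6):
    # W_out = Y X^T (X X^T + beta*I)^{-1}  (ridge regression).
    # X: (n_states, n_samples) collected reservoir activations.
    # Y: (n_outputs, n_samples) teacher outputs.
    # Larger beta shrinks W_out, trading training fit for stability.
    n = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(n))

Checking np.abs(W_out).max() after training is then a cheap sanity check for the kind of instability the excerpt describes.
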
“…This is achieved by means of a random process characterized by four control parameters (see [14] for more details). These parameters are: (1) α_U, the maximal absolute eigenvalue of the input weight matrix W_in; (2) ρ, the maximal absolute eigenvalue of the recurrent weight matrix W_rec; (3) K_in, the number of inputs driving each reservoir neuron; and (4) K_rec, the number of delayed reservoir outputs driving each reservoir neuron.…”
Section: Reservoir Computing (RC)
confidence: 99%
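
A minimal sketch of this random-generation step for the recurrent part, assuming a masked-Gaussian construction rescaled to the target spectral radius (the function and initialization details are illustrative, not taken from [14]):

import numpy as np

def make_reservoir(n, k_rec, rho, rng):
    # W_rec with k_rec nonzero Gaussian entries per row, rescaled so
    # its largest absolute eigenvalue equals rho.
    W = np.zeros((n, n))
    for r in range(n):
        idx = rng.choice(n, size=k_rec, replace=False)
        W[r, idx] = rng.standard_normal(k_rec)
    return W * (rho / np.max(np.abs(np.linalg.eigvals(W))))

rng = np.random.default_rng(0)
W_rec = make_reservoir(100, 10, 0.8, rng)  # rho = 0.8, K_rec = 10 as in the next excerpt
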
“…With τ_λ = −τ_fr / ln(1 − λ) as the leaky-integration time constant and T as the expected state duration, we select (ρ, τ_λ, K_in, K_rec) = (0.8, T, 10, 10). The parameter α_U is chosen so that the average variance of the reservoir outputs reaches a certain level [14]. Since layers 2 and 3 see basically the same inputs, α_U is taken to be the same for both layers.…”
Section: Reservoir Component Setup
confidence: 99%
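
Inverting the excerpt's relation τ_λ = −τ_fr / ln(1 − λ) gives the leak rate for a desired time constant, λ = 1 − exp(−τ_fr / τ_λ); setting τ_λ = T reproduces the selection above. A minimal sketch (the function name and example values are illustrative assumptions):

import numpy as np

def leak_rate(tau_fr, tau_lambda):
    # Leak rate satisfying tau_lambda = -tau_fr / ln(1 - lambda),
    # i.e. lambda = 1 - exp(-tau_fr / tau_lambda).
    return 1.0 - np.exp(-tau_fr / tau_lambda)

# Illustrative values only: a 10 ms frame period and an expected
# state duration T of 200 ms.
lam = leak_rate(tau_fr=0.010, tau_lambda=0.200)
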