2022
DOI: 10.1109/access.2022.3158755

Computational Efficiency of Multi-Step Learning Echo State Networks for Nonlinear Time Series Prediction

Abstract: The echo state network (ESN) is a representative model for reservoir computing, which has mainly been used for temporal pattern recognition. Recent studies have shown that multi-reservoir ESN models, constructed with multiple reservoirs, can enhance the potential of the ESN-based approach. In the present study, we investigate the computational performance and efficiency of the multi-step learning ESN, one of the multi-reservoir ESN models, which is characterized by step-by-step learning processes. We show that the…

Cited by 15 publications (15 citation statements)
References 26 publications (40 reference statements)
“…We show the results in performance in Table 1. These test scores are in line with our previous observations, as both DDNs perform consistently better than their respective baselines, but also similarly to or better than what is reported in several recent novel ESN implementations [5,17,1].…”
Section: Results (supporting)
confidence: 91%
“…This is crucial for understanding the temporal dynamics in time series data samples. The self-attention mechanism is formalized via equation (10),

Attention(Q, K, V) = softmax(QK^T / √d_k) V … (10)

where Q, K, and V represent the queries, keys, and values matrices, respectively, derived from the input data, and d_k is the dimensionality of the keys. This equation ensures that each output element is a weighted sum of the values, with weights computed based on the input's relevance.…”
Section: L = −E_{q(Z|X_A)}[log p(A|Z)] + KL[q(Z|X_A) || p(Z... (mentioning)
confidence: 99%
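The scaled dot-product attention formalized in this statement can be sketched in a few lines of NumPy. This is a minimal illustration, not the cited paper's implementation; the matrix shapes and the random inputs are assumptions for demonstration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n_q, n_k) relevance scores
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 queries, key dimension d_k = 8
K = rng.standard_normal((6, 8))  # 6 keys
V = rng.standard_normal((6, 5))  # 6 values, value dimension 5
out = attention(Q, K, V)
print(out.shape)  # (4, 5): one weighted sum of values per query
```

Each output row is a convex combination of the rows of V, with weights determined by how relevant each key is to the corresponding query, as the quoted passage describes.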
“…Then, in the reservoir layer, the input time series data is transformed nonlinearly into a high-dimensional space as reservoir states X_i(k) at a given node i (i = 1, 2, …, N). 1,31) For the output layer, the readout weight w_i connecting X_i(k) and the output y_i(k) is trained by linear regression to obtain the desired output. The reservoir output y(k) is described as a linear combination of X_i(k) and the w_i, as follows,…”
mentioning
confidence: 99%
“…The function of the reservoir in reservoir computing is to map the input to a high-dimensional feature space. 1,31) The reservoir's mapping function f i for the input u i and reservoir state X i is shown in Fig. 1(a).…”
mentioning
confidence: 99%
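The two statements above describe the standard ESN pipeline: a fixed reservoir maps the input into a high-dimensional state space, and only a linear readout is trained. A minimal sketch of that pipeline follows; the tanh nonlinearity, the weight scales, the spectral-radius normalization, the ridge regularizer, and the noisy-sine task are all illustrative assumptions, not details taken from the quoted papers.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 100, 500                    # reservoir size, series length

# A scalar nonlinear time series (noisy sine as a stand-in task):
# predict u(k+1) from u(k).
u = np.sin(0.1 * np.arange(T + 1)) + 0.05 * rng.standard_normal(T + 1)
target = u[1:]

W_in = rng.uniform(-0.5, 0.5, size=N)            # fixed input weights
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

# Reservoir layer: nonlinear mapping of the input into the
# high-dimensional states X_i(k).
X = np.zeros((T, N))
x = np.zeros(N)
for k in range(T):
    x = np.tanh(W @ x + W_in * u[k])
    X[k] = x

# Output layer: readout weights w_i trained by (ridge) linear regression;
# the output y(k) is a linear combination of the X_i(k).
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
y = X @ w
print(float(np.mean((y - target) ** 2)))  # training MSE
```

Only `w` is learned; the reservoir matrices stay fixed, which is what makes ESN training a single linear-regression step rather than gradient descent through time.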