2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851876

Analysis on Characteristics of Multi-Step Learning Echo State Networks for Nonlinear Time Series Prediction

Citation Types: 3 supporting, 16 mentioning, 0 contrasting
Cited by 12 publications (19 citation statements)
References 22 publications

“…7(a) (see Supplementary Figure 1 for the RMSE and MAE). This is because the multi-step learning ESN is suited for approximating dynamical systems with strong nonlinearity rather than those with long-term memory as suggested in our previous study [16]. The computational efficiency of the multi-step learning ESN is confirmed in Fig.…”
Section: B. The NARMA Model (supporting, confidence: 69%)
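The NARMA model referenced in this statement is a standard nonlinear benchmark for reservoir computing, and RMSE/MAE are the usual error measures. A minimal sketch, assuming the common NARMA10 formulation and input range (the cited paper's exact task settings are not given in this excerpt):

import numpy as np

def narma10(T, seed=0):
    # NARMA10: a nonlinear autoregressive benchmark with 10-step memory,
    # driven by i.i.d. uniform input u(t) in [0, 0.5].
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, T)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

def rmse(y_true, y_pred):
    # Root-mean-square error between target and prediction.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error between target and prediction.
    return np.mean(np.abs(y_true - y_pred))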
“…The multi-step learning ESN is one of the multi-reservoir ESN models, which consists of multiple ESN modules and additional connections as illustrated in Fig. 2 [16]. The N-step learning ESN contains N ESN modules, each including an input layer, a reservoir, and a readout.…”
Section: B. Multi-Step Learning Echo State Network (mentioning, confidence: 99%)
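This statement describes the N-step learning ESN as N chained ESN modules, each with its own input layer, reservoir, and readout. The sketch below is an illustration under stated assumptions, not the exact architecture of [16]: the "additional connections" are assumed here to feed each module the external input together with the preceding module's prediction, and the hyperparameters (reservoir size, spectral radius, ridge penalty) are placeholders.

import numpy as np

class ESNModule:
    # One ESN module: random input layer -> fixed recurrent reservoir
    # -> linear readout trained by ridge regression.
    def __init__(self, n_in, n_res=100, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
        W = rng.uniform(-1.0, 1.0, (n_res, n_res))
        # Rescale so the spectral radius equals rho (echo state property heuristic).
        self.W = W * (rho / np.max(np.abs(np.linalg.eigvals(W))))
        self.W_out = None

    def states(self, U):
        # Drive the reservoir with the input sequence U (shape: T x n_in).
        x = np.zeros(self.W.shape[0])
        X = np.zeros((len(U), len(x)))
        for t, u in enumerate(U):
            x = np.tanh(self.W_in @ u + self.W @ x)
            X[t] = x
        return X

    def fit(self, U, y, ridge=1e-6):
        X = self.states(U)
        self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
        return X @ self.W_out  # prediction on the training sequence

    def predict(self, U):
        return self.states(U) @ self.W_out

def multi_step_fit(u, y, n_steps=3):
    # Hypothetical wiring: module k receives [u(t), prediction of module k-1];
    # the actual inter-module connections should be taken from [16].
    modules, pred = [], None
    for k in range(n_steps):
        U = u.reshape(-1, 1) if pred is None else np.column_stack([u, pred])
        m = ESNModule(n_in=U.shape[1], seed=k)
        pred = m.fit(U, y)
        modules.append(m)
    return modules, pred

Under this assumed wiring, each added module refines the previous module's output, which matches the excerpt's picture of N modules plus extra connections.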
“…In contrast, recent connectomics analysis has revealed several important structural features in brain networks, including the modular organization [4]. Studies on modular-reservoir networks have reported that performance and robustness can be increased compared to randomly-connected reservoirs [5][6][7]. For example, Klampfl et al showed that modular structures that code specific inputs self-organize in a reservoir network when STDP is implemented in synapses within the reservoir and that the computational capability of linear readout neurons was enhanced after the STDP-based tuning of the synaptic weights [6].…”
Section: Introduction (mentioning, confidence: 98%)
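As a hedged illustration of the modular organization this statement refers to (the connection probabilities and the spectral-radius convention are assumptions, not values from [5]-[7]), a block-structured reservoir weight matrix with dense intra-module and sparse inter-module links can be built as follows:

import numpy as np

def modular_reservoir(n_modules=4, module_size=50, p_intra=0.2, p_inter=0.01,
                      rho=0.95, seed=0):
    # Dense random connectivity inside each module, sparse connectivity
    # between modules; the whole matrix is rescaled to spectral radius rho.
    rng = np.random.default_rng(seed)
    n = n_modules * module_size
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            same_module = (i // module_size) == (j // module_size)
            if rng.random() < (p_intra if same_module else p_inter):
                W[i, j] = rng.uniform(-1.0, 1.0)
    return W * (rho / np.max(np.abs(np.linalg.eigvals(W))))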