2007
DOI: 10.1007/978-3-540-74690-4_63

Comparison of Echo State Networks with Simple Recurrent Networks and Variable-Length Markov Models on Symbolic Sequences

Abstract: A lot of attention is now being focused on connectionist models known under the name "reservoir computing". The most prominent example of these approaches is a recurrent neural network architecture called an echo state network (ESN). ESNs have been successfully applied to several real-valued time-series modeling tasks, on which they performed exceptionally well. Using ESNs for processing symbolic sequences also seems attractive. In this work we experimentally support the claim that the state space of an ESN is organi…
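The truncated claim concerns how the reservoir state space arranges itself when an ESN is driven by symbols. As a minimal sketch of that setup, assuming one-hot input encoding, a tanh reservoir, and illustrative sizes and scaling constants (none taken from the paper):

```python
# Driving an untrained echo state network with a symbolic sequence.
# Only the readout of an ESN is ever trained; the weights below stay fixed.
import numpy as np

rng = np.random.default_rng(0)

alphabet = ["a", "b", "c"]
n_symbols = len(alphabet)
n_reservoir = 100  # illustrative size

W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_symbols))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
# Rescale so the spectral radius is below 1, a common condition
# aimed at ensuring the echo state property.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def one_hot(symbol):
    v = np.zeros(n_symbols)
    v[alphabet.index(symbol)] = 1.0
    return v

# State update: x(t) = tanh(W_in u(t) + W x(t-1)).
x = np.zeros(n_reservoir)
states = []
for s in "abcabcacb":
    x = np.tanh(W_in @ one_hot(s) + W @ x)
    states.append(x.copy())
# With contractive dynamics, states for sequences sharing a recent
# suffix end up close together — the kind of Markovian state-space
# organization the paper examines.
```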

Cited by 7 publications (3 citation statements)
References 16 publications
“…However, training of LSTM as an RNN-based model is reported to be inherently difficult (Lukoševičius, 2012). As a consequence, in this paper we have proposed the ESN architecture as a new, powerful approach in RNN research where, instead of a difficult learning process, it is based on the properties of an untrained, randomly initialized RNN (Čerňanský and Tiňo, 2007). We shall see in Section 5 that the ESN can significantly reduce the training time and achieve results comparable to the LSTM.…”
Section: Literature Review
confidence: 98%
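The quoted training-time claim rests on the fact that only the linear readout of an ESN is fitted. Below is a minimal, self-contained sketch of that step, with stand-in arrays for the collected reservoir states and teacher outputs and an illustrative ridge parameter; it is not code from either cited paper.

```python
# Why ESN training is cheap: given collected reservoir states X and
# teacher outputs Y, the readout W_out comes from a single regularized
# least-squares solve instead of backpropagation through time.
import numpy as np

rng = np.random.default_rng(1)
T, n_reservoir, n_out = 500, 100, 3          # illustrative sizes

X = rng.standard_normal((T, n_reservoir))    # stand-in for reservoir states
Y = rng.standard_normal((T, n_out))          # stand-in for teacher signals

beta = 1e-2                                  # ridge regularization strength
# Solve (X^T X + beta I) W = X^T Y, i.e. ridge regression in closed form.
W_out = np.linalg.solve(X.T @ X + beta * np.eye(n_reservoir), X.T @ Y).T

y_pred = X @ W_out.T                         # readout applied to all states
```

An LSTM fitted to the same targets would instead run many epochs of gradient descent through time, which is the cost difference the quotation refers to.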
“…Methods used for the classification span from distance measures (e.g., Dynamic Time Warping) [11,12] and statistical models (e.g., Hidden Markov Models) [13,14] to artificial neural architectures (e.g., Recurrent Neural Networks) [15][16][17][18][19] and hybrid solutions [20]. These methods vary in complexity and adaptability, with Recurrent Neural Networks being one of the most promising directions in the field [21]. Adaptation of RNNs, though, is known to have high computational complexity.…”
Section: Introduction
confidence: 99%
“…These results were repeated by Tong et al. [4], who compared SRNs with a popular recent model, echo-state networks (ESN), introduced by Jaeger [5]. It has been shown that ESN performance is similar to that of common statistical methods (variable-length Markov models), while a well-trained SRN can demonstrate superior prediction abilities [6]. What all these modeling approaches share is their focus on natural language processing as an independent domain.…”
Section: Introduction
confidence: 99%
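For readers unfamiliar with the statistical baseline named in the quotation, the following is a minimal sketch in the spirit of a variable-length Markov model: predict the next symbol from the longest previously observed context, backing off to shorter ones. Real VLMMs grow a pruned context tree; this back-off predictor, with a hypothetical `max_depth` parameter, only illustrates the idea.

```python
# Next-symbol prediction from variable-length contexts with back-off.
from collections import Counter, defaultdict

def fit_contexts(seq, max_depth=4):
    counts = defaultdict(Counter)        # context -> next-symbol counts
    for t in range(len(seq) - 1):
        # Record every suffix ending at position t, up to max_depth,
        # including the empty context (the unigram fallback).
        for d in range(min(max_depth, t + 1) + 1):
            counts[seq[t - d + 1:t + 1]][seq[t + 1]] += 1
    return counts

def predict(counts, history, max_depth=4):
    # Back off from the longest observed suffix of `history`.
    for d in range(min(max_depth, len(history)), -1, -1):
        ctx = history[len(history) - d:]
        if counts[ctx]:
            return counts[ctx].most_common(1)[0][0]
    return None

counts = fit_contexts("abcabcabacbabc")
print(predict(counts, "ab"))   # most frequent successor of context "ab"
```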