Reservoir Computing (RC) has been gaining prominence in the Neural Computation community since the 2000s. An RC model contains at least two well-differentiated structures. One is a recurrent part called the reservoir, which expands the input data and historical information into a high-dimensional space. This projection is carried out in order to enhance the linear separability of the input data. The other part is a memory-less structure designed to make the learning process robust and fast. RC models are an alternative to Turing Machines and Recurrent Neural Networks for modeling cognitive processing in the nervous system. Additionally, they are interesting Machine Learning tools for Time Series Modeling and Forecasting. Recently, a new RC model was introduced under the name of Echo State Queueing Networks (ESQN). In this model the reservoir is a dynamical system which arises from Queueing Theory. The initialization of the reservoir parameters may influence the model performance. Recently, some unsupervised techniques were used to improve the performance of one specific RC method. In this paper, we apply these techniques to set the reservoir parameters of the ESQN model. In particular, we study the initialization of the ESQN model using Self-Organizing Maps. Additionally, we test the model performance when the reservoir is initialized using Hebbian rules. We present an empirical comparison of these reservoir initializations on a range of time series benchmarks.
INTRODUCTION

A Recurrent Neural Network (RNN) is a class of Neural Network in which connections among neurons can form circuits. This creates an internal state, which makes the RNN a discrete-time state-space model. The internal states are used to process arbitrary sequences of input data, as well as to retain information about the past. Even though the RNN model has been proven to be a powerful tool for time series modeling [8,10,21,24], in practice it is hard to train [7,25]. The main problem is that convergence of gradient-descent training is not guaranteed, because the vanishing and exploding gradient problems can arise [7,25].

At the beginning of the 2000s, a new approach for designing and training RNNs was introduced with the Echo State Networks (ESN) [15] and Liquid State Machines (LSM) [22]. Since 2007, this trend has become collectively known under the name of Reservoir Computing (RC) [31]. The idea overcomes the main limitations of RNN training while introducing no significant drawbacks. An RC model uses a dynamical system, called the reservoir, to expand the input data into a larger space. The reservoir is characterized by a large and randomly connected recurrent network. A main characteristic of RC models is the conceptual separation between the adjustable and non-adjustable parameters in the learning process [21]. The reservoir parameters are kept fixed during the training steps. Only the weights not involved in recurrences are updated by the training algorithm. The model outputs are computed using a memory-less structure.
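To make this separation concrete, the following is a minimal sketch of a canonical ESN in Python with NumPy. The reservoir size, weight ranges, spectral-radius scaling, washout length, and ridge-regularization constant are illustrative assumptions, not values taken from this paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_out = 1, 100, 1  # illustrative dimensions (assumption)

    # Fixed, randomly generated reservoir: these weights are NOT trained.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

    def run_reservoir(inputs):
        """Expand the input sequence into the high-dimensional reservoir state space."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W_in @ u + W @ x)  # standard ESN state update
            states.append(x)
        return np.array(states)

    def train_readout(states, targets, ridge=1e-6):
        """Fit ONLY the memory-less linear readout, via ridge regression."""
        S = states
        return np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ targets)

    # Usage on a toy one-step-ahead forecasting task.
    series = np.sin(np.linspace(0, 20 * np.pi, 1000))
    U, Y = series[:-1].reshape(-1, 1), series[1:].reshape(-1, 1)
    X = run_reservoir(U)
    W_out = train_readout(X[100:], Y[100:])  # discard a washout of 100 steps
    pred = X @ W_out

Because only W_out is learned, training reduces to a linear least-squares problem; this is what makes the RC learning process fast and robust compared with gradient-based RNN training.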