The Echo State Network (ESN) is a distinctive type of recurrent neural network built around a reservoir: a large, sparse, randomly connected hidden layer. ESNs have been applied successfully to a variety of non-linear problems, including prediction and classification, and are used in several architectures, including the recently proposed multi-layer variants. In particular, Deep Echo State Network (DeepESN) models, which stack multiple reservoir layers, have recently been shown to be effective at predicting high-dimensional, complex non-linear processes. However, properly configuring a DeepESN architecture and its training parameters is a time-consuming and difficult task: several hyperparameters (the number of hidden neurons, input scaling, the number of layers, and the spectral radius) must be carefully adjusted to minimize the learning error, and manual tuning does not guarantee optimal training results. To address these concerns, this study introduces the grey wolf optimization (GWO) algorithm for tuning the DeepESN. The resulting GWO-based DeepESN (GWODESN) is evaluated on time-series forecasting tasks, and its results are compared with standard ESN, LSTM, and ELM models. The findings indicate that the proposed model achieves the best prediction performance.
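To make the role of the tuned hyperparameters concrete, the sketch below shows a minimal leaky-integrator DeepESN in Python/NumPy: each layer is a sparse random reservoir whose recurrent weights are rescaled to a target spectral radius and whose inputs are scaled, with deeper layers driven by the states of the layer below. The function names, leak rate, sparsity, and layer settings are illustrative assumptions, not the paper's exact configuration; the linear readout (e.g. ridge regression on the collected states) and the GWO search over the per-layer parameters are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_units, spectral_radius, input_scaling, sparsity=0.9):
    """Build one reservoir layer: sparse random recurrent weights rescaled to the
    desired spectral radius, plus uniformly scaled input weights."""
    W = rng.uniform(-1.0, 1.0, (n_units, n_units))
    W[rng.random((n_units, n_units)) < sparsity] = 0.0           # sparse connectivity
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))        # rescale spectral radius
    W_in = rng.uniform(-input_scaling, input_scaling, (n_units, n_in))
    return W_in, W

def deep_esn_states(u, layer_params, leak=0.3):
    """Run a stacked (deep) ESN over an input sequence u of shape (T, n_in).
    Layer 1 is driven by the input; each deeper layer is driven by the states of
    the layer below. Returns the concatenated states for training a readout."""
    layers, n_in = [], u.shape[1]
    for n_units, rho, scale in layer_params:                     # (neurons, spectral radius, input scaling)
        layers.append(make_reservoir(n_in, n_units, rho, scale))
        n_in = n_units                                           # next layer reads this layer's state
    xs = [np.zeros(p[0]) for p in layer_params]
    states = []
    for t in range(len(u)):
        drive = u[t]
        for i, (W_in, W) in enumerate(layers):
            pre = np.tanh(W_in @ drive + W @ xs[i])
            xs[i] = (1 - leak) * xs[i] + leak * pre              # leaky-integrator update
            drive = xs[i]
        states.append(np.concatenate(xs))
    return np.array(states)

# Per-layer hyperparameters of the kind GWO would search over:
# (hidden neurons, spectral radius, input scaling) for each layer.
layer_params = [(100, 0.9, 0.5), (100, 0.8, 0.5)]
u = rng.standard_normal((200, 1))                                # toy univariate time series
X = deep_esn_states(u, layer_params)                             # shape: (200, 200)
```

In a GWO-based setup, each candidate solution (wolf) would encode one such hyperparameter vector, and its fitness would be the readout's validation error on the forecasting task.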