Simple recurrent neural networks are widely used in time series prediction. Researchers and application developers often choose arbitrarily between the Elman and Jordan simple recurrent neural networks for their applications. A hybrid of the two, called the Elman-Jordan (or multi-recurrent) neural network, is also in use. In this study, we evaluated the performance of these neural networks on three established benchmark time series prediction problems. Results from the experiments showed that the Jordan neural network performed significantly better than the others. Nevertheless, the other two neural networks also achieved satisfactory forecasting performance.
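For readers unfamiliar with the structural difference between the two architectures, the NumPy sketch below (our illustration, not code from the study) contrasts a single Elman step, which feeds the previous hidden state back into the hidden layer, with a single Jordan step, which feeds the previous output back instead; the weight names, sizes, and toy series are illustrative assumptions.

```python
# Illustrative sketch: Elman vs. Jordan recurrence (not the study's code).
import numpy as np

def step_elman(x, h_prev, Wx, Wh, Wo):
    """One Elman step: the previous hidden state is the recurrent context."""
    h = np.tanh(Wx @ x + Wh @ h_prev)   # context = previous hidden state
    y = Wo @ h
    return y, h

def step_jordan(x, y_prev, Wx, Wy, Wo):
    """One Jordan step: the previous output is the recurrent context."""
    h = np.tanh(Wx @ x + Wy @ y_prev)   # context = previous output
    y = Wo @ h
    return y, h

# Toy one-step-ahead pass over a sine wave with random (untrained) weights.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1, 8, 1
Wx = rng.normal(scale=0.3, size=(n_hid, n_in))
Wh = rng.normal(scale=0.3, size=(n_hid, n_hid))
Wy = rng.normal(scale=0.3, size=(n_hid, n_out))
Wo = rng.normal(scale=0.3, size=(n_out, n_hid))

series = np.sin(np.linspace(0, 4 * np.pi, 50))
h, y = np.zeros(n_hid), np.zeros(n_out)
for x_t in series:
    y_elman, h = step_elman(np.array([x_t]), h, Wx, Wh, Wo)
    y_jordan, _ = step_jordan(np.array([x_t]), y, Wx, Wy, Wo)
    y = y_jordan
```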
This paper addresses the problem of learning a regression model for the prediction of data traffic in a cellular network. We propose a cooperative learning strategy that involves two Jordan recurrent neural networks (JNNs) trained using the firefly algorithm (FFA) and the resilient backpropagation algorithm (Rprop), respectively. While the cooperative nature of the learning process ensures the effectiveness of the regression model, the recurrent structure of the neural networks allows the model to handle temporally evolving data. Experiments were carried out to evaluate the proposed approach using High-Speed Downlink Packet Access (HSDPA) data demand and throughput measurements collected from different cell sites of a Universal Mobile Telecommunications System (UMTS)-based cellular operator. The proposed model produced significantly better results than those obtained on the same problems by the traditional approach of training a JNN separately with FFA or Rprop.
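As a rough illustration of the kind of population-based search involved, the sketch below implements a bare-bones firefly algorithm over a stand-in weight vector and adds a simple gradient refinement of the best firefly in place of Rprop. The cooperation rule, the linear stand-in model, and all parameter values are assumptions for illustration and do not reproduce the authors' cooperative strategy.

```python
# Simplified sketch (assumptions, not the authors' algorithm): a firefly
# search explores globally, and the best firefly is periodically refined
# by a gradient step standing in for Rprop.
import numpy as np

rng = np.random.default_rng(1)

def loss(w, X, y):
    """Mean squared error of a linear stand-in for the JNN regressor."""
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Toy regression data standing in for traffic measurements.
X = rng.normal(size=(200, 5))
y = X @ np.array([0.5, -1.0, 2.0, 0.0, 1.5]) + 0.1 * rng.normal(size=200)

n_fireflies, dim, beta0, gamma, alpha = 15, 5, 1.0, 1.0, 0.05
swarm = rng.normal(size=(n_fireflies, dim))

for it in range(100):
    fitness = np.array([loss(w, X, y) for w in swarm])
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if fitness[j] < fitness[i]:              # firefly j is brighter (lower loss)
                r2 = np.sum((swarm[i] - swarm[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                swarm[i] += beta * (swarm[j] - swarm[i]) + alpha * rng.normal(size=dim)
    # Cooperation step: refine the current best firefly with a gradient step.
    best = int(np.argmin([loss(w, X, y) for w in swarm]))
    swarm[best] -= 0.01 * grad(swarm[best], X, y)

print("final MSE:", loss(swarm[best], X, y))
```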
Artificial neural networks (NNs) are widely used in modeling and forecasting time series. Since most practical time series are non-stationary, NN forecasters are often implemented with recurrent/delayed connections to handle the temporal component of the time-varying sequence. These recurrent/delayed connections increase the number of weights that must be optimized during training of the NN. Particle swarm optimization (PSO) is now an established method for training NNs and has been shown in several studies to outperform the classical backpropagation training algorithm. The original PSO was, however, designed for static environments. For non-stationary data, modified versions of PSO designed for optimization in dynamic environments are used instead. These dynamic PSOs have been used successfully to train NNs on classification problems in non-stationary environments. This paper formulates the training of an NN forecaster as a dynamic optimization problem in order to investigate whether recurrent/delayed connections are necessary in an NN time series forecaster when a dynamic PSO is used as the training algorithm. Experiments were carried out on eight forecasting problems. For each problem, a feedforward NN (FNN) was trained with a dynamic PSO algorithm, and its performance was compared to that of four types of recurrent NNs (RNNs), each trained using gradient descent, a standard PSO for static environments, and the dynamic PSO algorithm. The RNNs employed were an Elman NN, a Jordan NN, a multi-recurrent NN (MRNN) and a time delay NN (TDNN).
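To make the formulation concrete, the sketch below treats training a small feedforward forecaster as a swarm optimization problem: each particle is a flattened weight vector scored by one-step-ahead squared error on a sliding window of a toy series. It uses a standard (static-environment) PSO rather than a dynamic PSO, and the network size, window length, and PSO parameters are illustrative assumptions, not those of the paper.

```python
# Illustrative sketch: NN training posed as particle swarm optimization.
import numpy as np

rng = np.random.default_rng(2)

# Sliding-window dataset: predict x[t] from the previous 4 values.
series = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * rng.normal(size=400)
window = 4
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

n_hid = 6
dim = window * n_hid + n_hid + n_hid + 1       # all FNN weights, flattened

def forecast(w, X):
    """Unpack the flat weight vector into a 1-hidden-layer FNN and predict."""
    W1 = w[:window * n_hid].reshape(window, n_hid)
    b1 = w[window * n_hid:window * n_hid + n_hid]
    W2 = w[window * n_hid + n_hid:-1]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((forecast(w, X) - y) ** 2)

# Standard (static-environment) PSO over the flattened weights.
n_particles, w_inertia, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.normal(scale=0.5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for it in range(200):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("training MSE:", mse(gbest))
```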