Abstract-We present an approach for selecting optimal parameters for the pipelined recurrent neural network (PRNN) in the paradigm of nonlinear and nonstationary signal prediction. Although there has recently been progress in algorithms for training the PRNN, no account has been taken of some inherent features of the PRNN. We therefore provide a study of the role of nesting, which is inherent to the PRNN architecture. The number of nested modules needed for a given prediction task, and their contribution toward the final prediction gain (PG), give a thorough insight into the way the PRNN performs and offer solutions for the optimization of its parameters. In particular, nesting, which is a contractive function by its nature, allows the forgetting factor in the cost function of the PRNN to exceed unity, so that it becomes an emphasis factor. This compensates for the small contribution of the distant modules to the prediction process, due to nesting, and helps to circumvent the problem of vanishing gradient experienced in RNN's used for prediction. The PRNN, with its parameters chosen according to the established criteria, is shown to outperform the linear least mean square (LMS) and recursive least squares (RLS) predictors, as well as previously proposed PRNN schemes, at no additional computational cost.
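For concreteness, a minimal sketch of the cost function in question, assuming the standard PRNN formulation with M nested modules, instantaneous output error e_i(n) of module i, and forgetting factor \lambda:

\[
E(n) = \sum_{i=1}^{M} \lambda^{i-1} e_i^2(n).
\]

Under the conventional choice 0 < \lambda \le 1, the weighting decays along the pipeline, so the distant modules (large i) contribute little to E(n); allowing \lambda > 1, as advocated here, amplifies the errors of those modules, turning \lambda into an emphasis factor that counteracts the contraction introduced by nesting.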