Abstract. This paper proposes a new framework, referred to as Recurrent Bayesian Genetic Programming (rbGP), to sustain steady convergence in Genetic Programming (GP) (i.e., to prevent premature convergence) and to improve its ability to find superior solutions that generalise well. The term 'Recurrent' is borrowed from the taxonomy of Neural Networks (NNs), in which a Recurrent NN (RNN) is a special type of network that uses a feedback loop, usually to account for temporal information embedded in the sequence of data points presented to the network. Unlike in an RNN, however, the temporal dimension in our algorithm pertains to the sequential nature of the evolutionary process itself, not to the data sampled from the problem solution space. rbGP introduces an intermediate generation between every pair of consecutive generations in order to collect information about the fitness distribution of each parent's offspring. Feeding the collected information into a Bayesian model, rbGP predicts, for any individual, the probability that it will produce offspring fitter than itself. Tournament selection then uses this predicted probability in place of the original fitness value. Empirical evidence from 13 problems, compared against canonical GP, demonstrates that rbGP preserves generalisation in most cases.
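To make the selection mechanism summarised above concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a simple Beta-Bernoulli posterior as the Bayesian model, and the individual names and offspring counts are hypothetical stand-ins for statistics gathered during the intermediate generation.

```python
import random

def improvement_probability(n_fitter, n_offspring, alpha=1.0, beta=1.0):
    """Posterior mean of a Beta-Bernoulli model for the chance that an
    individual's next offspring is fitter than the individual itself.
    Counts come from the intermediate generation (assumed model)."""
    return (n_fitter + alpha) / (n_offspring + alpha + beta)

def tournament_select(population, stats, k=3):
    """Tournament selection where the winner is the candidate with the
    highest predicted improvement probability rather than the best raw
    fitness. `stats` maps an individual to (n_fitter, n_offspring)."""
    candidates = random.sample(population, k)
    return max(candidates,
               key=lambda ind: improvement_probability(*stats[ind]))

# Toy usage with illustrative counts only.
population = ["ind_a", "ind_b", "ind_c"]
stats = {"ind_a": (2, 10),   # 2 of 10 offspring were fitter than the parent
         "ind_b": (7, 10),
         "ind_c": (4, 10)}
print(tournament_select(population, stats, k=3))  # likely 'ind_b' under this model
```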