In this paper we present an integrated genetic programming environment called java GP Modelling. The java GP Modelling environment is an implementation of the steady-state genetic programming algorithm, which evolves tree-based structures that represent models of the input-output relation of a system. The motivation of this paper is to compare the GP algorithm with neural network architectures when applied to the task of forecasting and trading the ASE 20 Greek Index using only autoregressive terms as inputs. This is done by benchmarking the forecasting performance of the GP algorithm against six different ARMA-Neural Network combination designs, namely a Hybrid and a Mixed Higher Order Neural Network (HONN), a Hybrid and a Mixed Recurrent Network (RNN), and a Hybrid and a Mixed classic Multilayer Perceptron (MLP), as well as against some traditional techniques, either statistical, such as an autoregressive moving average model (ARMA), or technical, such as a moving average convergence/divergence model (MACD), plus a naïve trading strategy. More specifically, the trading performance of all models is investigated in a forecasting and trading simulation on the ASE 20 time series of closing prices over the period 2001-2008, using the last one and a half years for out-of-sample testing. We use the ASE 20 daily series as many financial institutions are ready to trade at this level, and it is therefore possible to leave orders with a bank for business to be transacted on that basis. As it turns out, the GP model does remarkably well and outperforms all other models in a simple trading simulation exercise. This is also the case when more sophisticated trading strategies using confirmation filters and leverage are applied: the GP model still produces better results and outperforms all other neural network and traditional statistical models in terms of annualised return.
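
To illustrate the kind of model the environment evolves, the listing below sketches a steady-state GP loop in Java that evolves expression trees over lagged (autoregressive) inputs of a series: parents are picked by tournament selection, one offspring is created by subtree crossover, and it replaces the current worst individual if its mean squared forecast error is lower. This is a minimal sketch under assumed settings (operator set, population size, selection and replacement rules, and a toy AR(1) series in place of the ASE 20 data); it is not the java GP Modelling implementation itself, and all class, method and parameter names are illustrative.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal steady-state GP sketch: evolves expression trees that map lagged
// (autoregressive) values of a series to a one-step-ahead forecast.
public class SteadyStateGpSketch {
    static final Random RNG = new Random(42);
    static final int LAGS = 5;                         // autoregressive inputs x[t-1] .. x[t-LAGS]
    static final String[] OPS = {"+", "-", "*", "/"};

    // Expression tree node: an operator with two children, or a terminal lag index.
    static class Node {
        String op;                                     // null for terminals
        int lag;                                       // terminal: which lagged input to read
        Node left, right;

        double eval(double[] lags) {
            if (op == null) return lags[lag];
            double a = left.eval(lags), b = right.eval(lags);
            if (op.equals("+")) return a + b;
            if (op.equals("-")) return a - b;
            if (op.equals("*")) return a * b;
            return Math.abs(b) < 1e-9 ? 1.0 : a / b;   // protected division
        }
        Node copy() {
            Node n = new Node(); n.op = op; n.lag = lag;
            if (op != null) { n.left = left.copy(); n.right = right.copy(); }
            return n;
        }
    }

    // Random tree: a terminal (lag) or an operator with two random subtrees.
    static Node randomTree(int depth) {
        Node n = new Node();
        if (depth == 0 || RNG.nextDouble() < 0.3) {
            n.lag = RNG.nextInt(LAGS);
        } else {
            n.op = OPS[RNG.nextInt(OPS.length)];
            n.left = randomTree(depth - 1);
            n.right = randomTree(depth - 1);
        }
        return n;
    }

    // Fitness: mean squared error of one-step-ahead forecasts over the series.
    static double fitness(Node tree, double[] series) {
        double sse = 0; double[] lags = new double[LAGS];
        for (int t = LAGS; t < series.length; t++) {
            for (int k = 0; k < LAGS; k++) lags[k] = series[t - 1 - k];
            double err = tree.eval(lags) - series[t]; sse += err * err;
        }
        return sse / (series.length - LAGS);
    }

    static void collect(Node n, List<Node> out) {      // all nodes, used to pick crossover points
        out.add(n);
        if (n.op != null) { collect(n.left, out); collect(n.right, out); }
    }

    // Subtree crossover: graft a random subtree of b onto a random node of a copy of a.
    static Node crossover(Node a, Node b) {
        Node child = a.copy();
        List<Node> points = new ArrayList<>(); collect(child, points);
        List<Node> donors = new ArrayList<>(); collect(b, donors);
        Node target = points.get(RNG.nextInt(points.size()));
        Node donor = donors.get(RNG.nextInt(donors.size())).copy();
        target.op = donor.op; target.lag = donor.lag;
        target.left = donor.left; target.right = donor.right;
        return child;
    }

    static int tournament(double[] fit) {              // best of three random picks (lower MSE wins)
        int best = RNG.nextInt(fit.length);
        for (int k = 0; k < 2; k++) { int c = RNG.nextInt(fit.length); if (fit[c] < fit[best]) best = c; }
        return best;
    }

    public static void main(String[] args) {
        double[] series = new double[500];             // toy AR(1) series standing in for the ASE 20 data
        for (int t = 1; t < series.length; t++)
            series[t] = 0.7 * series[t - 1] + 0.1 * RNG.nextGaussian();
        int popSize = 200; Node[] pop = new Node[popSize]; double[] fit = new double[popSize];
        for (int i = 0; i < popSize; i++) { pop[i] = randomTree(4); fit[i] = fitness(pop[i], series); }
        // Steady-state loop: breed one offspring per step and replace the current worst individual.
        for (int step = 0; step < 5000; step++) {
            Node child = crossover(pop[tournament(fit)], pop[tournament(fit)]);
            List<Node> nodes = new ArrayList<>(); collect(child, nodes);
            if (nodes.size() > 100) continue;          // crude bloat control
            double f = fitness(child, series);
            int worst = 0;
            for (int i = 1; i < popSize; i++) if (fit[i] > fit[worst]) worst = i;
            if (f < fit[worst]) { pop[worst] = child; fit[worst] = f; }
        }
        double best = fit[0]; for (double f : fit) best = Math.min(best, f);
        System.out.printf("Best in-sample MSE after evolution: %.6f%n", best);
    }
}

In a simulation of the kind described above, the best evolved tree's one-day-ahead forecast would typically be turned into a buy or sell signal whose returns are then annualised, with confirmation filters and leverage layered on top of that signal.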