Generalisation has been a major issue in RAM-based neural networks. In pRAM networks generalisation is produced by noisy reinforcement learning, a completely hardware-implementable (built-in) algorithm. This paper presents the first part of a modular technique to analyse the formation of the basins of attraction in such systems. It proves that reinforcement learning in a single pRAM site is a globally stable system in the continuous limit of incremental learning. It also shows how the stable state depends on the penalty/reward ratio and on the learning rate. The evolution of learning in the time domain shows the effects of the initial state and of the halting moment on the final state. The paper ends with considerations on how noise contributes to the formation of basins of attraction in pRAM neurons.
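As a rough illustration of the kind of dynamics discussed here, the sketch below simulates noisy reinforcement learning at a single memory site. It assumes the reward/penalty update rule commonly used for pRAMs, in which the stored firing probability alpha moves toward the emitted output on reward and toward its complement (scaled by the penalty/reward ratio) on penalty; the rule, the toy environment, and all names are illustrative assumptions, not taken from this paper.

```python
import random

def pram_site_step(alpha, rho, lam, environment):
    """One noisy reinforcement step for a single pRAM memory site.

    alpha       : stored firing probability of the addressed site (0..1)
    rho         : learning rate
    lam         : penalty/reward ratio
    environment : maps the binary output to (reward, penalty) signals
    """
    a = 1 if random.random() < alpha else 0          # noisy binary output of the site
    reward, penalty = environment(a)
    # Assumed reward/penalty rule: move alpha toward the emitted output
    # when rewarded, and toward its complement (scaled by lam) when penalised.
    alpha += rho * ((a - alpha) * reward + lam * ((1 - a) - alpha) * penalty)
    return min(max(alpha, 0.0), 1.0)

# Toy environment: output 1 is rewarded, output 0 is penalised.
env = lambda a: (1, 0) if a == 1 else (0, 1)

alpha = 0.5                                          # initial state of the site
for _ in range(5000):
    alpha = pram_site_step(alpha, rho=0.01, lam=0.2, environment=env)
print(f"alpha after training: {alpha:.3f}")          # drifts toward the rewarded output
```

In this toy setting the expected update is positive for any alpha below 1, so the site converges to firing with probability close to 1; how fast it gets there, and where the stable state lies for other environments, depends on rho and lam in the way the analysis in the paper makes precise.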