Abstract. A systolic array of dedicated processing elements (PEs) is presented as the heart of a multi-model neural-network accelerator. The instruction set of the PEs makes it possible to implement several widely used neural models, including multi-layer perceptrons with the back-propagation learning rule and Kohonen feature maps. Each PE holds an element of the synaptic weight matrix. An instantaneous swapping mechanism for the weight matrix enables the efficient implementation of neural networks larger than the physical PE array. A systolically flowing instruction accompanies each input vector propagating through the array, which avoids the need to empty and refill the array when its operating mode changes. Fixed-point arithmetic is used in the PEs, and the problem of optimally scaling real variables in fixed-point format is addressed. Both the GENES IV chip, containing a matrix of 2 × 2 PEs, and an auxiliary arithmetic circuit have been manufactured and successfully tested. The MANTRA I machine has been built around these chips. Peak performance of the full system is between 200 and 400 MCPS in the evaluation phase and between 100 and 200 MCUPS during the learning phase, depending on the algorithm being implemented.
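The scaling problem mentioned in the abstract can be illustrated with a minimal sketch (not taken from the paper): representing a real variable in signed fixed-point format requires choosing how many fractional bits to allocate, trading dynamic range against precision. The function names and the 16-bit word width below are illustrative assumptions, not details of the GENES IV PE.

```python
def to_fixed(x: float, frac_bits: int, word_bits: int = 16) -> int:
    """Quantize x to a signed fixed-point integer with `frac_bits`
    fractional bits, saturating at the word's representable range.
    (Illustrative only; word width and rounding mode are assumptions.)"""
    scaled = round(x * (1 << frac_bits))
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, scaled))

def to_float(q: int, frac_bits: int) -> float:
    """Recover the real value represented by the fixed-point integer q."""
    return q / (1 << frac_bits)

# More fractional bits give finer resolution but a smaller range:
q = to_fixed(0.7071, frac_bits=12)
approx = to_float(q, frac_bits=12)   # close to 0.7071

# Values outside the representable range saturate:
clipped = to_fixed(100.0, frac_bits=12)   # clamped to the 16-bit maximum
```

Choosing `frac_bits` optimally for each network variable (weights, activations, error terms) is exactly the kind of scaling decision the abstract refers to.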