The use of programmable devices leads to flexible and area-efficient implementations of biologically plausible neural entities such as synapses and neurons. However, the area constraints of reconfigurable devices such as Field Programmable Gate Arrays (FPGAs) limit their use to rather small, already trained neural networks. This paper investigates and describes area-efficient spiking neural building blocks for implementing integrate-and-fire (IF) and leaky integrate-and-fire (LIF) neuron models on reconfigurable hardware. It is demonstrated that the abstract behaviour of spiking neurons can be emulated in a bit-accurate implementation. In a linear comparison, 2×10³ synapses and 1.2×10³ fully parallel artificial spiking neurons can be implemented with the proposed building blocks and architectures. On larger devices with more logic elements, such as the Virtex-5, almost 4.3×10³ synapses and 2.5×10³ neurons can be fitted. The main contributions of this paper are area-efficient fully parallel architectures that avoid the use of embedded multipliers, an area-efficient architecture for the leaky membrane, and a compact implementation of the neural cell that can be used to emulate large-scale fully parallel spiking neural networks.
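As a rough illustration of how a leaky membrane can avoid embedded multipliers (a sketch only, not the paper's actual architecture), choosing a leak factor of the form 1 − 2⁻ᵏ lets the decay be computed with a single shift and subtraction, which maps directly onto FPGA logic. The parameters LEAK_SHIFT and THRESHOLD below are hypothetical values for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative parameters, not taken from the paper. */
#define LEAK_SHIFT  4      /* leak factor 1 - 2^-4 = 0.9375         */
#define THRESHOLD   1024   /* firing threshold in fixed-point units */

/* One discrete-time LIF update: the leak v * (1 - 2^-k) is computed
 * with a shift and a subtraction, so no multiplier is needed.
 * Returns 1 if the neuron fires (and resets), 0 otherwise.          */
static int lif_step(int32_t *v, int32_t syn_input)
{
    *v -= *v >> LEAK_SHIFT;  /* leaky decay via shift and subtract */
    *v += syn_input;         /* integrate weighted synaptic input  */
    if (*v >= THRESHOLD) {
        *v = 0;              /* fire and reset membrane potential  */
        return 1;
    }
    return 0;
}

int main(void)
{
    int32_t v = 0;
    /* Drive the neuron with a constant input current. */
    for (int t = 0; t < 50; t++) {
        if (lif_step(&v, 100))
            printf("spike at t = %d\n", t);
    }
    return 0;
}
```

Setting LEAK_SHIFT to an effectively infinite value (i.e., skipping the decay line) reduces the same update to a plain IF neuron, which is why a single building block can cover both models.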