A binarized neural network (BNN) inference accelerator is designed in which weights are stored in loadless four-transistor static random access memory (4T SRAM) cells. A time-multiplexed exclusive-NOR (XNOR) multiplier with switched capacitors is proposed, which prevents the data in the loadless 4T SRAM cell from being destroyed during operation. An accumulator with a current-sensing scheme is also proposed to make the multiply-accumulate (MAC) operation completely linear and read-disturb free. The BNN inference accelerator is applied to the MNIST dataset recognition problem with an accuracy of 96.2% for 500 data, and the throughput, energy efficiency, and area efficiency are confirmed to be 15.50 TOPS, 72.17 TOPS/W, and 50.13 TOPS/mm², respectively, by HSPICE simulation in 32 nm technology. Compared with conventional SRAM-cell-based BNN inference accelerators scaled to 32 nm technology, the synapse cell size is reduced to less than 16% (0.235 µm²), and the cell efficiency (synapse array area / (synapse array plus peripheral circuits area)) is 73.27%, which is equivalent to the state of the art of SRAM-cell-based BNN accelerators.
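The XNOR-based multiply-accumulate at the heart of BNN inference can be sketched in plain Python (a minimal software illustration using NumPy; the accelerator described above realizes this with switched-capacitor multipliers and current-sensing accumulation in hardware, not software):

```python
import numpy as np

def bnn_mac(inputs, weights):
    """Binary MAC via XNOR and popcount.

    inputs, weights: sequences of +1/-1 values. With +1 encoded as 1
    and -1 as 0, the product of two binarized values equals their XNOR,
    so the dot product reduces to a popcount of the XNOR result.
    """
    # Encode +1 -> True, -1 -> False
    x = np.asarray(inputs) > 0
    w = np.asarray(weights) > 0
    xnor = ~(x ^ w)                  # True where input and weight agree
    popcount = np.count_nonzero(xnor)
    n = x.size
    # Dot product over {+1, -1}: (#matches) - (#mismatches)
    return 2 * popcount - n

# Example: equivalent to the real-valued dot product of the +1/-1 vectors
print(bnn_mac([1, -1, 1, -1], [1, 1, -1, -1]))   # -> 0
```

This popcount identity (2·matches − n) is why BNN accelerators replace multipliers with XNOR gates and the accumulation with a summing circuit, here realized as a linear current-sensing accumulator.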