Reproducing brain-like cognitive capabilities in graphics processing units (GPUs) or central processing units (CPUs) remains extremely challenging. In contrast, the brain offers very high cognitive capacity while preserving exceptional energy efficiency. One of the main differences between GPUs or CPUs and the brain is memory management. On the one hand, GPUs and CPUs physically separate the arithmetic and storage units, which is the origin of the enormous energy consumption associated with data transfer between the two units. [1] This trend is particularly exacerbated for Artificial Neural Networks (ANNs), which require a very large number of memory accesses. On the other hand, in the brain, biological neurons and synapses are physically close to each other. Accordingly, developing non-von-Neumann architectures that perform in-memory or near-memory computing with non-volatile memory (NVM) technologies is one of the most promising strategies to improve the energy efficiency of artificial intelligence. [2] Another relevant difference between artificial processors and the brain is the way information is coded. On the one hand, GPUs and CPUs rely on high-precision floating-point arithmetic. On the other hand, the brain communicates with sparse, binary, asynchronous spikes. In particular, a neuron receives spikes from upstream neurons through its synapses.

Single memristor crossbar arrays are a very promising approach to reduce the power consumption of deep learning accelerators. In parallel, emerging bio-inspired spiking neural networks (SNNs) offer very low power consumption with satisfactory performance on complex artificial intelligence tasks. In such networks, synaptic weights can be stored in nonvolatile memories, which are read intensively during inference; this can lead to device failure. In this context, a 1S1R (1 Selector, 1 Resistor) device composed of a HfO2-based OxRAM memory stacked on a Ge-Se-Sb-N-based ovonic threshold switch (OTS) back-end selector is proposed for high-density binarized SNN (BSNN) synaptic weight hardware implementation. An extensive experimental statistical study combined with a novel Monte Carlo model enables an in-depth analysis of the OTS switching dynamics, based on field-driven stochastic nucleation of conductive dots in the layer. This makes it possible to quantify the occurrence frequency of OTS erratic switching as a function of the applied voltage and the 1S1R reading frequency, and the associated 1S1R reading error rate is calculated. Focusing on the standard MNIST image recognition task, BSNN figures of merit (footprint, electrical consumption during inference, inference frequency, accuracy, and tolerance to errors) are optimized by engineering the network topology, training procedure, and activation sparsity.
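As a rough illustration of how such a Monte Carlo model of field-driven stochastic nucleation can be set up, the sketch below estimates an OTS switching probability by treating each candidate site in the chalcogenide layer as an independent, field-accelerated Poisson nucleation process. All names and parameter values (`N_SITES`, `NU_0`, `E_A`, `GAMMA`) are hypothetical placeholders, not the calibrated values of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder parameters -- not the paper's fitted values.
N_SITES = 1000    # candidate nucleation sites in the OTS layer
NU_0 = 1e13       # attempt frequency (Hz), assumed
E_A = 1.2         # zero-field nucleation barrier (eV), assumed
GAMMA = 0.5       # field-induced barrier lowering (eV per V), assumed
K_T = 0.0259      # thermal energy at 300 K (eV)

def switch_probability(v_applied, t_pulse, n_trials=10_000):
    """Monte Carlo estimate of the probability that the OTS switches
    during a pulse of amplitude v_applied (V) and duration t_pulse (s).

    Each site nucleates a conductive dot as a Poisson process whose
    rate grows exponentially with the applied field; the device
    switches as soon as any site fires.
    """
    per_site_rate = NU_0 * np.exp(-(E_A - GAMMA * v_applied) / K_T)
    # Time to the first nucleation among N_SITES independent sites is
    # exponentially distributed with total rate N_SITES * per_site_rate.
    t_first = rng.exponential(1.0 / (N_SITES * per_site_rate), size=n_trials)
    return float(np.mean(t_first < t_pulse))

for v in (1.0, 1.2, 1.4):
    print(f"V = {v:.1f} V -> P(switch) ~ {switch_probability(v, 100e-9):.3g}")
```

The steep voltage dependence of the printed probabilities comes from the exponential barrier lowering, which is why erratic switching at read voltage is rare per operation yet can accumulate over many reads.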
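Because a stored weight is read many times during inference, a small per-read erratic-switching probability compounds. A minimal sketch of that bookkeeping, under the assumption of independent reads (the per-read probability and read count are placeholder numbers):

```python
def cumulative_read_error(p_single, n_reads):
    """Probability of at least one erratic switch over n_reads
    independent read operations, given per-read probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_reads

# Placeholder numbers: a weight read one million times during inference
# with an assumed per-read erratic-switching probability of 1e-9.
print(cumulative_read_error(1e-9, 1_000_000))  # ~1.0e-3
```

This is the sense in which the 1S1R reading error rate depends on the reading frequency: raising the inference rate raises the number of reads per unit time, and with it the cumulative error probability.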
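For context on what a BSNN layer computes with such binary weights, here is a minimal integrate-and-fire sketch with {0, 1} weights, one per synapse, as they could be stored in a 1S1R array. The topology, threshold, and input sparsity are illustrative assumptions, not the optimized network from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

class BinarizedIFLayer:
    """Minimal integrate-and-fire layer with binary {0, 1} weights, one
    per synapse, as they could be stored in a 1S1R array and read at
    each input spike. Sizes and threshold are illustrative."""

    def __init__(self, n_in, n_out, threshold=4.0):
        self.w = rng.integers(0, 2, size=(n_in, n_out)).astype(np.float32)
        self.v = np.zeros(n_out, dtype=np.float32)   # membrane potentials
        self.threshold = threshold

    def step(self, in_spikes):
        """Integrate one timestep of binary input spikes and emit binary
        output spikes where the membrane potential crosses threshold."""
        self.v += in_spikes @ self.w        # accumulate synaptic current
        fired = self.v >= self.threshold
        self.v[fired] = 0.0                 # reset the neurons that fired
        return fired.astype(np.float32)

layer = BinarizedIFLayer(n_in=64, n_out=10)
in_spikes = (rng.random(64) < 0.1).astype(np.float32)  # sparse input
print(int(layer.step(in_spikes).sum()), "output neurons fired")
```

Note that memory accesses occur only for active inputs, which is why activation sparsity directly reduces both the electrical consumption and the number of device reads per inference.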