Thanks to advances in transistor scaling, the computing capabilities of central processing units (CPUs) and graphics processing units (GPUs) have increased greatly, enabling the rapid development of artificial neural networks (ANNs). ANNs predict the correlation between a set of input and output parameters by emulating the principles of biological neural networks. [1] Nowadays, ANNs are applied across a broad spectrum of fields, such as computer vision, natural language processing, autonomous driving, and decision making. Although ANNs excel at some specific tasks, such as playing the game of Go, [2,3] the human brain outperforms ANNs in cognitive and classification tasks, with fast speed and high power efficiency. The advantages of the human brain lie in its special architecture, featuring massive parallelism, spiking neurons, and an enormous number of basic computing units: neurons (~10^11) and synapses (~10^15). Computers, in contrast, offer high speed and precision in basic operations. [4] Massively parallel processing compensates for the low speed of individual neurons and synapses, and spiking neurons compute more powerfully than other types of neurons. [5] This parallel processing relies on variable synaptic weights, which form the basis of memory. Moreover, the same information is processed by many neurons and transmitted to a downstream neuron, which also enhances precision. The human brain therefore exhibits a salient feature, in-memory computing, that suits high-performance applications involving large amounts of data. In contrast, conventional digital computing systems face the Von Neumann bottleneck: in the Von Neumann architecture, computing units and memory are separated, which hinders the advancement of ANNs.
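The in-memory computing principle described above can be illustrated with a minimal sketch of a memristive crossbar: weights are stored as device conductances, and a vector-matrix multiplication emerges physically from Ohm's law (I = G·V) and Kirchhoff's current law (column currents sum). The array size and conductance range below are illustrative assumptions, not values from any specific device.

```python
import numpy as np

# Illustrative sketch of in-memory computing in a memristive crossbar:
# each weight is stored as a conductance G[i, j] (in siemens), and the
# multiply-accumulate happens where the data resides, not in a separate ALU.
rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductance matrix: 4 rows x 3 columns
V = np.array([0.1, 0.2, 0.0, 0.3])        # input voltages applied to the rows (V)

# Ohm's law gives a current G[i, j] * V[i] through each device; Kirchhoff's
# current law sums these along each column, yielding I = V @ G in one step.
I = V @ G                                 # output currents (A), one per column
print(I.shape)                            # one summed current per output column
```

In a digital Von Neumann machine this product would require fetching every weight from memory; in the crossbar, all multiply-accumulate operations occur in parallel inside the memory array itself.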
Thus, several novel architectures have been proposed to overcome the Von Neumann bottleneck, such as brain-inspired computing (processing information with spiking neurons, also called spiking neural networks [SNNs]) and on-chip in-memory computing. [6] Implementing these new architectures requires artificial synapses and neurons. The massive parallelism of the brain relies on a large number of neurons and synapses; however, emulating the functions of biological synapses and neurons requires tens of complementary metal-oxide-semiconductor (CMOS) transistors each. Realizing synapses and neurons in a simplified manner is therefore urgent for the development of ANNs. [7] Because synapses adjust their weights repeatedly in small steps, synaptic devices need to exhibit analog switching behavior. [8] Biological neurons, however, maintain an analog membrane potential yet rapidly generate all-or-nothing spikes, so both analog and digital devices are acceptable for the emulation of neurons. Many emerging devices have been proposed to develop hardware-based ANNs, e.g., memristive devices, phase-change memories (PCMs), [9] magnetic random-access memories (MRAMs), [10] and ferroelectric field-effect transistors (FeFETs). [11] Each technology has its advantages and disadvantages. PCM and M...
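The combination of an analog membrane potential with an all-or-nothing output can be sketched with a minimal leaky integrate-and-fire (LIF) neuron model. The leak factor, threshold, and input current below are arbitrary illustrative values, not parameters of any particular device or biological neuron.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch: the membrane
# potential v evolves in an analog fashion, but the emitted spike is
# strictly all-or-nothing. Parameter values are illustrative only.
def lif_step(v, i_in, leak=0.9, threshold=1.0):
    """Advance the membrane potential by one discrete time step.

    The potential decays by the factor `leak`, accumulates the input
    current `i_in`, and resets to zero once it crosses `threshold`,
    at which point the neuron fires a spike.
    """
    v = leak * v + i_in           # analog integration with leakage
    if v >= threshold:
        return 0.0, True          # all-or-nothing spike, then reset
    return v, False

# Drive the neuron with a constant input and record its spike train.
v, spikes = 0.0, []
for _ in range(20):
    v, fired = lif_step(v, i_in=0.3)
    spikes.append(fired)
print(sum(spikes))                # number of spikes over 20 steps
```

With these values the potential builds up over a few steps and fires periodically, which is why either analog devices (tracking v) or digital devices (registering the binary spike) can serve as neuron emulators.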