as the thermal budget has increased rapidly in recent years and could surpass the future worldwide revenue of the semiconductor industry. [4,5] Consequently, electronic miniaturization limits the sustainable growth of computing technology.

The von Neumann design is the conventional computing architecture, which separates the processor and the memory unit. This architecture performs a computational task through sequential procedures and has served as a mainstay of modern computing since 1945. [6] In fact, the architecture is significantly beneficial to both hardware and software developers because each component can be improved and readily extended into a unified electronic system, even without a comprehensive understanding of all the components. However, the von Neumann bottleneck generates substantial power consumption and latency during computing operations. [2,7] This limitation results from data transmission between the two functional units (processor and memory), particularly when the memory is accessed through a bus with restricted bandwidth. [2,7] Moreover, according to the International Data Corporation (IDC), global data will rapidly increase to 175 ZB (1.75 × 10²³ B) by 2025. [8] Consequently, the von Neumann bottleneck will become even more detrimental under such tremendous workloads, which arise from the routine operation of this architecture.

Recently, there has been increasing demand for intelligent computers that can efficiently process the massive volume of global data, resulting in the development of a wide range of software- or hardware-based artificial neural networks (ANNs) aimed at achieving the computing ability of the human brain. [2,9] Software-based ANNs have recently exhibited remarkable capabilities, such as image recognition, [10,11] natural language processing, [12,13] and performing specific tasks, [14] some beyond the human level. However, since existing ANNs are built on the conventional computing architecture, that is, the von Neumann architecture, the learning parameters stored in memory are iteratively transferred to the processor to perform a task. The bottleneck arising from the movement of large datasets eventually reduces the energy and time efficiency of operating the ANN software. [2,7] To alleviate this problem, several advanced software- and hardware-based ANN approaches have been suggested. [2,7,15-17] Advanced algorithms, such as network pruning, quantization, Huffman coding, and knowledge distillation, have been proposed to reduce the size and computational cost of ANN models.

Memristors have recently attracted significant interest due to their applicability as promising building blocks of neuromorphic computing and electronic systems. The dynamic reconfiguration of memristors, which is based on the history of applied electrical stimuli, can mimic both essential analog synaptic and neuronal functionalities. These can be utilized as the node and terminal devices in an artificial neural network. Consequently, the ability to understand, control, and utilize fundamental switching principles and the various types of device architectures of the...