increasing the requirements of smart data computing and storage. However, in the conventional von Neumann computer architecture, memory and computation are physically separated, so the data transfer rate between the central processing unit and the memory limits the overall efficiency, prolongs the computation time, and dissipates considerable power, which cannot meet the demands of real-time applications. [1] By contrast, the human brain is a complex organ composed of ≈100 billion neurons and 60 trillion synapses that consumes an average power of only ≈20 W while exhibiting superior learning and cognitive abilities. [2] Therefore, an entirely new computing architecture for artificial intelligence (AI) must be developed to overcome these limitations of latency and power consumption. For this purpose, deep learning, the core of AI, is built on artificial neural networks, which are designed to mimic how the brain works. Such networks comprise synapses and neurons and encompass architectures such as spiking neural networks and perceptual systems. To emulate synapses, several nonvolatile memory devices have been proposed, including floating-gate (FG) flash memory, phase-change memory (PCM), memristors in resistive random-access memory (RRAM), and ferroic tunnel junctions. [3-6] Since the scaling of