The human brain is widely considered one of the most intricate and advanced information-processing systems in the world. It comprises approximately 86 billion neurons, each capable of forming up to 10,000 synapses with other neurons, resulting in an exceptionally complex network of connections from which intelligence emerges. Beyond this physiological complexity, the human brain exhibits a wide range of characteristics that contribute to its remarkable functional capabilities. For example, it integrates information from multiple sensory modalities, such as vision, hearing, and touch, allowing it to form a coherent perception of the world. The brain's capacity for parallel processing is also essential for efficiently handling multiple information streams simultaneously; this is achieved through the connections and real-time communication among different brain regions, although the underlying mechanisms are not fully understood. In addition, the brain is highly adaptable, capable of reorganizing its structure and function in response to changing environments and experiences. This property, known as neuroplasticity, enables the brain to learn and develop new skills throughout life. The human brain is also notable for its high-level cognitive functions, such as problem-solving, decision-making, creativity, and abstract reasoning, which are supported by the prefrontal cortex, a brain region that is particularly well developed in humans.

Creating an artificial general intelligence (AGI) system that possesses human-level or even greater intelligence and is capable of performing a wide range of intellectual tasks, such as reasoning, problem-solving, and creativity, has long been a pursuit of humanity, with the modern endeavor dating back to the mid-20th century. In the 1940s, pioneers such as Alan Turing developed early ideas about computing machines and their potential to simulate human thinking [1]. Since then, seeking to replicate the principles of human intelligence in artificial systems has significantly advanced the development of AGI and its applications. These principles include the structure and function of neural networks, the plasticity of synaptic connections, the dynamics of neural activity, and more. In 1943, McCulloch and Pitts proposed the first mathematical model of an artificial neuron [2], known as the McCulloch-Pitts (MCP) neuron. Inspired by the Hebbian theory of synaptic plasticity, Frank Rosenblatt introduced the perceptron, a major improvement over the MCP neuron model [3], and showed that by relaxing some of the MCP neuron's rules, artificial neurons could actually learn from data. However, research on artificial neural networks stagnated until backpropagation was proposed by Werbos in 1975 [4]. Backpropagation was inspired by the way the brain modifies the strengths of connections between neurons to learn and improve its performance through synaptic plasticity; it attempts to mimic this process by adjusting the weights (synaptic strengths) between neurons in an artificial neural network. De...