…as the thermal budget has increased rapidly in recent years, which could surpass the future revenue generated worldwide by the semiconductor industry. [4,5] Consequently, electronic miniaturization limits the sustainable growth of computing technology. The von Neumann design is the conventional computing architecture, which separates the processor and the memory unit. This architecture performs a computational task through sequential procedures and has functioned as a mainstay of modern computing since 1945. [6] In fact, the architecture is significantly beneficial to both hardware and software developers because each component can be improved and readily integrated into a unified electronic system, even without a comprehensive understanding of all the components. However, the von Neumann bottleneck generates substantial power consumption and latency during computing operations. [2,7] This limitation results from data transmission between the two functional units (processor and memory), particularly when the memory is accessed through a bus with restricted bandwidth. [2,7]

Moreover, according to the International Data Corporation (IDC), global data will rapidly increase to 175 ZB (1.75 × 10²³ B) by 2025. [8] Consequently, the von Neumann bottleneck will become even more detrimental as such enormous workloads arise in the everyday operation of this architecture. Recently, there has been increasing demand for intelligent computers that can efficiently process this vast amount of global data, resulting in the development of a wide range of software- or hardware-based artificial neural networks (ANNs) aimed at achieving the computing ability of the human brain. [2,9] Software-based ANNs have recently exhibited remarkable capabilities, such as image recognition, [10,11] natural language processing, [12,13] and performing specific tasks, [14] in some cases beyond the human level. However, since existing ANNs are built on the conventional computing architecture, that is, the von Neumann architecture, the learning parameters stored in memory are iteratively transferred to the processor to perform a task. The bottleneck arising from the movement of large datasets eventually reduces the energy and time efficiency of operating ANN software. [2,7] To alleviate this problem, several advanced software- and hardware-based ANN approaches have been suggested. [2,7,15-17] Advanced algorithms, such as network pruning, quantization, Huffman coding, and knowledge distillation, have been …

Memristors have recently attracted significant interest due to their applicability as promising building blocks of neuromorphic computing and electronic systems. The dynamic reconfiguration of memristors, which is based on the history of applied electrical stimuli, can mimic both essential analog synaptic and neuronal functionalities. These can be utilized as the node and terminal devices in an artificial neural network. Consequently, the ability to understand, control, and utilize fundamental switching principles and various types of device architectures of the…
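As a rough illustration of how the compression algorithms named above reduce the parameter traffic between memory and processor, the following is a minimal sketch (not from the cited works) of magnitude-based weight pruning combined with uniform 8-bit quantization; the thresholds, matrix size, and sparsity level are assumptions chosen only for demonstration.

```python
# Hedged sketch: magnitude-based pruning + uniform 8-bit quantization,
# two of the model-compression methods mentioned in the text. All values
# below (sparsity, bit width, matrix size) are illustrative assumptions.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_uniform(weights: np.ndarray, bits: int = 8):
    """Map float weights to `bits`-bit signed integers plus one scale factor."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

w_pruned = prune_by_magnitude(w, sparsity=0.9)   # 90% of weights set to zero
q, scale = quantize_uniform(w_pruned)            # 32-bit floats -> 8-bit ints

dense_bytes = w.nbytes
sparse_bytes = np.count_nonzero(w_pruned) * q.itemsize
print(f"memory traffic: {dense_bytes} B dense vs ~{sparse_bytes} B pruned+quantized")
```

The point of the sketch is only that fewer and smaller parameters mean fewer bytes crossing the memory bus, which is the traffic the von Neumann bottleneck penalizes.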
Modern artificial neural network technology built on a deterministic computing framework faces a critical challenge in dealing with massive data that are largely unstructured and ambiguous. This challenge demands advances in elementary physical devices capable of tackling such uncertainties. Here, we designed and fabricated a SiOx nanorod memristive device by employing the glancing angle deposition (GLAD) technique, yielding a controllable stochastic artificial neuron that can mimic the fundamental integrate-and-fire signaling and stochastic dynamics of a biological neuron. The nanorod structure provides a random distribution of multiple nanopores across the active area, capable of forming a multitude of Si filaments at many SiOx nanorod edges after the electromigration process, leading to stochastic switching events with a very high dynamic range (≈5.15 × 10¹⁰) and low energy (≈4.06 pJ). Different probabilistic activation (ProbAct) functions in a sigmoid form are implemented, showing their controllability with low variation through manufacturing and electrical programming schemes. Furthermore, as an application prospect, based on the suggested memristive neuron, we demonstrated self-resting neural operation with a local circuit configuration and revealed probabilistic Bayesian inference for genetic regulatory networks with a low normalized mean squared error (≈2.41 × 10⁻²) and robustness to ProbAct variation.
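To make the behavior described in the abstract concrete, the sketch below models a stochastic, self-resetting integrate-and-fire neuron whose firing probability follows a sigmoid ("ProbAct"-like) function of the integrated potential. This is an assumed behavioral model for illustration only, not the authors' device model; the parameters v_half, beta, and leak are hypothetical stand-ins for device programming conditions.

```python
# Hedged sketch (assumed behavioral model, not the reported device physics):
# a leaky integrate-and-fire neuron that fires probabilistically, with the
# firing probability given by a sigmoid of the membrane potential, and that
# resets itself after each spike.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid_prob(v, v_half=1.0, beta=8.0):
    """Firing probability vs. integrated potential; v_half and beta are
    hypothetical parameters standing in for programming conditions."""
    return 1.0 / (1.0 + np.exp(-beta * (v - v_half)))

def run_stochastic_lif(inputs, leak=0.05):
    """Leaky integration of input pulses, probabilistic firing, reset on fire."""
    v, spikes = 0.0, []
    for x in inputs:
        v = (1.0 - leak) * v + x                 # leaky integration
        fired = rng.random() < sigmoid_prob(v)   # stochastic firing decision
        spikes.append(int(fired))
        if fired:
            v = 0.0                              # self-resetting after a spike
    return spikes

pulses = np.full(50, 0.12)            # constant train of small input pulses
print(run_stochastic_lif(pulses))     # irregular (stochastic) spike train
```

Under this assumed model, identical input pulse trains produce different spike patterns from run to run, which is the kind of controlled randomness the abstract exploits for probabilistic Bayesian inference.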